Discussion about Primecoin and its infrastructure. Primecoin is a very innovative cryptocurrency: the first non-Hashcash-PoW coin, naturally (not artificially) scarce, with very fast confirmations (1 minute), an elastically readjusting reward, and useful mining (its byproducts are prime number chains). Primecoin is sustainable (miners are guaranteed to have revenue) and decentralized (ASICs/FPGAs have no particular advantage). A sidechain for decentralized data applications (e.g. Storj) is currently in development.
Vertcoin was created in 2014. It is a direct hedge against long-term centralization of mining consensus on the Bitcoin network. Vertcoin achieves its mining consensus solely through graphics cards, as they are the most abundant and widely available consensus devices that produce a reasonable amount of hashrate. This is done using a mining algorithm that is deliberately geared against devices like ASICs, FPGAs and CPUs (the latter because of botnets), making them extremely inefficient. Consensus distribution over time is the most important aspect of a blockchain and should not be taken lightly. It is critical that you understand what blockchain specifications mean and do to fully understand Vertcoin.
When users of our network send each other Vertcoin, their transactions are secured by a process called mining. Miners compose a so-called block out of the pending transactions and must perform a large number of computations, called hashes, to produce the Proof-of-Work. With this Proof-of-Work, the block is accepted by the network and the transactions in it become confirmed. Mining is essentially a race: whoever finds a valid Proof-of-Work and gets their block propagated across more than half of the Vertcoin network first wins, and is allowed to reward themselves with the block reward. The block reward is how new Vertcoin comes into circulation. It started at 50 VTC when Vertcoin launched and halves every four years; the current block reward is 25 VTC.

Vertcoin's One Click Miner: https://github.com/vertcoin-project/One-Click-Miner/releases

Learn more about mining here: https://vertcoin.org/mine/

Specification List:
· Launch date: Jan 11, 2014
· Consensus Mechanism: Proof-of-Work
· Total Supply: 84,000,000 Vertcoin
· Preferred Consensus Device: GPU
· Mining Algorithm: Lyra2REv3 (made by Vertcoin)
· Blocktime: 2.5 minutes
· SegWit: Activated
· Difficulty Adjustment Algorithm: Kimoto Gravity Well (every block)
· Block Halving: 4-year interval
· Initial Block Reward: 50 coins
· Current Block Reward: 25 coins

More spec information can be found here: https://vertcoin.org/specs-explained/
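The emission schedule above can be sanity-checked with a short sketch. Assuming a Bitcoin-style schedule of halving every 840,000 blocks (four years of 2.5-minute blocks; the interval is inferred from the specs listed, not stated in this post), the 84,000,000 cap falls out of the geometric series:

```python
def block_reward(height, initial=50.0, interval=840_000):
    """Block subsidy: halves every `interval` blocks (Bitcoin-style schedule)."""
    return initial / 2 ** (height // interval)

# Summing over 64 halving eras: supply converges to initial * interval * 2
total = sum(block_reward(era * 840_000) * 840_000 for era in range(64))
print(round(total))  # 84000000 -- matches the stated 84,000,000 cap
```

This also reproduces the current reward: any block in the second era (heights 840,000 to 1,679,999) pays 25 VTC.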
Why Does Vertcoin Use GPUs Then?
ASICs (Manufacturer Monopoly): If mining were just digging holes, then sure, use the most powerful tool available, which would be an ASIC. The problem is that ASICs are not widely available, and they happen to be controlled by a manufacturing monopoly in China. So you want the most widely available tool that produces a fair amount of hashrate, which currently means a graphics card. CPUs would be great too, but unfortunately there are viruses called botnets that take over hundreds of thousands of computers (they’re almost as bad as ASICs).
Mining In Pools
Because mining is a race, it’s difficult for an individual miner to acquire enough computational power to win this race solo. Therefore there’s a concept called pool-mining. With pool-mining, miners cooperate in finding the correct Proof-of-Work for the block, and share the block reward based on the work contributed. The amount of work contributed is measured in so-called shares. Finding the Proof-of-Work for a share is much easier than finding it for a block, and when the cooperating miners find the Proof-of-Work for the block, they distribute the reward based on the number of shares each miner found. Vertcoin always recommends using P2Pool to keep mining as decentralized as possible. How Do I Get Started? If you want to get started mining, check out the Mine Vertcoin page.
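The share-based payout described above can be sketched as a simple proportional split (a toy model: real pools typically use schemes like PPLNS and deduct fees, and the miner names here are purely illustrative):

```python
def split_reward(block_reward, shares):
    """Split a block reward proportionally to the shares each miner found.

    `shares` maps miner name -> number of valid shares submitted.
    """
    total = sum(shares.values())
    return {miner: block_reward * n / total for miner, n in shares.items()}

# 25 VTC block reward split across three hypothetical miners
payouts = split_reward(25.0, {"alice": 60, "bob": 30, "carol": 10})
print(payouts)  # {'alice': 15.0, 'bob': 7.5, 'carol': 2.5}
```

Finding a share is much easier than finding a block, so even small miners accumulate enough shares for the split to closely track their actual work contributed.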
Vertcoin just forked to Lyra2REv3, and we are currently working on Verthash.
Verthash was already under development before we decided to hard fork to Lyra2REv3. While Verthash would have had the same effect on ASICs (making them useless for mining Vertcoin), its timeline was incompatible with the desire to get rid of ASICs quickly. Verthash is still under development and tries to address the outsourceability problem.

Verthash is an I/O-bound algorithm that uses the blockchain data as input to the hashing algorithm. It therefore requires miners to have all the blockchain data available to them, currently about 4 GB. Making this mining data mandatory makes life harder for auto-profit-switching miners, like the ones that rent out their GPUs to NiceHash: they will need to keep a full node running while mining other algorithms, ready for the moment Verthash becomes more profitable, since the data needs to be available immediately and updating it can take a while.

Over the past month, we have successfully developed a first implementation of Verthash in the Vertcoin Core code base. Within the development team we have run a few nodes on testnet to test the functionality, and everything seems to work properly. The next step is to build out the GPU miners for AMD and Nvidia. There is no ETA for this at the moment, since we're waiting on GPU developers, who are in high demand. Once the miners are ready, we'll release the Vertcoin 0.15 beta that hard-forks the testnet, together with the miners, for the community to have a test run. Given the structural difference between Lyra2RE and Verthash, we'll have to run the testnet for a longer period than we did with the Lyra2REv3 hard fork, to make sure the system is reliable before hard-forking mainnet. So the timeline will be longer than with the Lyra2REv3 hard fork.

Some people in the community have voiced concerns about the fact that Verthash development is not being done "out in the open", i.e. the code commits are not visible on GitHub.
The two main reasons for us to keep our cards close to our chest at this stage are: (1) only when the entire system, including miners, has been coded up can we be sure the system works; we don't want to release preliminary code that doesn't work or isn't secure. And (2) we don't want to give hardware manufacturers or mining outsourcing platforms a head start on trying to defeat the mechanisms we've put in place.
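A heavily simplified sketch of the general idea behind an I/O-bound, dataset-dependent hash follows. This is illustrative only: the real Verthash specification, seed derivation, and mixing function are not public at the time of writing, so SHA-256 and the read pattern here are stand-ins for the technique, not the actual algorithm.

```python
import hashlib

def dataset_hash(header: bytes, dataset: bytes, rounds: int = 8) -> bytes:
    """Mix a block header with pseudorandom reads from a large dataset.

    A miner must keep the full dataset available (for Verthash, ~4 GB of
    blockchain-derived data): each read offset depends on the evolving
    digest, so the reads cannot be precomputed or prefetched.
    """
    digest = hashlib.sha256(header).digest()
    for _ in range(rounds):
        offset = int.from_bytes(digest[:8], "little") % (len(dataset) - 32)
        chunk = dataset[offset:offset + 32]
        digest = hashlib.sha256(digest + chunk).digest()
    return digest

# Tiny toy dataset standing in for the multi-gigabyte file
data = bytes(range(256)) * 1024
h = dataset_hash(b"example-header", data)
```

The point of the construction is that hashing throughput is bounded by random-access reads into a dataset far larger than any cache, which is what penalizes miners who don't keep the data resident.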
Hello, I’ve been trying to decide on an FPGA development board, and have only been able to find posts and Reddit threads from 4-5 years ago. So I wanted to start a new thread and ask about the best “mid-range” FPGA development board in 2018. (Price range: $100-$300.) I started with this Quora answer about FPGA boards, from 2013. The Altera DE1 sounded good. Then I looked through the Terasic DE boards. Then I found this Reddit thread from 2014, asking about the DE1-SoC vs the Cyclone V GX Starter Kit: https://www.reddit.com/r/FPGA/comments/1xsk6w/cyclone_v_gx_starter_kit_vs_de1soc_board/ (I was also leaning towards the DE1-SoC.) Anyway, I thought I’d better ask here, because there are probably some new things to be aware of in 2018. I’m completely new to FPGAs and VHDL, but I have experience with electronics/microcontrollers/programming. My goal is to start with some basic soft-core processors. I want to get some C / Rust programs compiling and running on my own CPU designs. I also want to play around with different instruction sets, and maybe start experimenting with asynchronous circuits (e.g. clock-less CPUs). Also, I don't know if this is possible, but I'd like to experiment with ternary computing, or work with analog signals instead of purely digital logic.

EDIT: I just realized that you would call those FPAAs, i.e. "analog" instead of "gate". It would be cool if there were a dev board that also had an FPAA, but no problem if not.

EDIT 2: I also realized why "analog signals on an FPGA" doesn't make any sense, because of how LUTs work. They emulate boolean logic with a lookup table, and the table can only store 0s and 1s, so there's no way to emulate a transistor in an intermediate state. I'll just have to play around with some transistors on a breadboard.

UPDATE: I've put together a table with some of the best options:
A very simple FPGA development board that plugs into a Raspberry Pi, so you have a "backup" hard-core CPU that can control networking, etc. Supports a huge range of pmod accessories. You can write a program/circuit so that the Raspberry Pi CPU and the FPGA work together, similar to a SoC. Proprietary bitstream is fully reverse engineered and supported by Project IceStorm, and there is an open-source toolchain that can compile your hardware design to bitstream. Has everything you need to start experimenting with FPGAs.
Xilinx Zynq 7-Series SoC: ARM Cortex-A9 processor and Artix-7 FPGA. 125 I/O pins. 1GB DDR2 RAM. Texas Instruments WiLink 8 wireless module for 802.11n Wi-Fi and Bluetooth 4.1. No LEDs or buttons, but easy to wire up your own on a breadboard. If you want to use a baseboard, you'll need a snickerdoodle black ($195) with the pins in the "down" orientation (e.g. for the "breakyBreaky breakout board" ($49) or the piSmasher SBC ($195)). The base snickerdoodle only comes with pins in the "up" orientation and doesn't support any baseboards, but you can still plug jumpers into the pins and wire things up on a breadboard.
Has one of the latest Xilinx SoCs. 2 GB (512M x32) LPDDR4 memory. Wi-Fi / Bluetooth. Mini DisplayPort. 1x USB 3.0 Type Micro-B, 2x USB 3.0 Type A. Audio I/O. Four user-controllable LEDs. No buttons and only a few LEDs, but it's easy to wire up your own on a breadboard.
Xilinx Zynq 7000 SoC (ARM Cortex-A9, 7-series FPGA.) 1 GB DDR3 RAM. A few switches, push buttons, and LEDs. USB and Ethernet. Audio in/out ports. HDMI source + sink with CEC. 8 Total Processor I/O, 40 Total FPGA I/O. Also a faster version for $299 (Zybo Z7-20).
Same as DE10-Standard, but not as many peripherals, buttons, LEDs, etc.
icoBoard ($100). (Buy it here.) The icoBoard plugs into a Raspberry Pi, so it's similar to having a SoC. The iCE40-HX8K chip comes with 7,680 LUTs (logic elements.) This means that after you learn the basics and create some simple circuits, you'll also have enough logic elements to run the VexRiscv soft-core CPU (the lightweight Murax SoC.) The icoBoard also supports a huge range of pluggable pmod accessories:
numato Mimas A7 ($149). An excellent development board with a Xilinx Artix 7 FPGA, so you can play with a bigger / faster FPGA and run a full RISC-V soft-core with all the options enabled, and a much higher clock speed. (The iCE40 FPGAs are a bit slow and small.)
I ordered an iCE40-HX8K Breakout Board to try out the IceStorm open-source tooling. (I would have ordered an icoBoard if I had found it earlier.) I also bought a numato Mimas A7 so that I could experiment with the Artix-7 FPGA and Xilinx software (Vivado Design Suite).
What can I do with an FPGA? / How many LUTs do I need?
VexRiscv is "A FPGA friendly 32 bit RISC-V CPU implementation", written in SpinalHDL. VexRiscv has a lot of plugin and configuration options. The Murax SoC is a very light SoC that can run on an iCE40-HX8K (but probably not on the HX1K, which only has 1,280 LUTs). The Briey SoC only runs on Xilinx or Altera FPGAs.
Sup-sup Monteros! Here is a report from the XMR.RU team! The whole XMR.RU team is thankful to you for your support and donations, which help to disseminate relevant information about Monero. The following articles were translated into Russian and posted not only on XMR.RU but also on Bitcointalk, Forum.Bits.Media, Golos.io, Steemit, Medium and Facebook:
--- I also want to remind you about the wonderful channel of our wonderful u/v1docq47, where you can watch video news about Monero: https://www.youtube.com/channel/UChZc5PLsbP5zeFrmOYMKGmA/videos Few of you understand Russian, but I think it is not difficult to subscribe to the channel and leave a couple of likes, and this will help spread Monero among Russian-speaking users in the future. --- Who are we? A group of Monero enthusiasts from Ukraine and Russia. What are we doing? We spread the word about Monero across the whole CIS. You can support us, so we can translate more interesting stuff about Monero. XMR: 42CxJrG1Q8HT9XiXJ1Cim4Sz18rM95UucEBeZ3x6YuLQUwTn6UWo9ozeA7jv13v8H1FvQn9dgw1Gw2VMUqdvVN1T9izzGEt BTC: 1FeetSJ7LFZeC328FqPqYTfUY4LEesZ5ku --- Here you can see what all donations are spent on. ;-) Cheers!
Technical Cryptonight Discussion: What about low-latency RAM (RLDRAM 3, QDR-IV, or HMC) + ASICs?
The Cryptonight algorithm is described as ASIC resistant, in particular because of one feature:
A megabyte of internal memory is almost unacceptable for the modern ASICs.
EDIT: Each instance of Cryptonight requires 2MB of RAM, so any Cryptonight multi-processor needs 2MB per instance. Since CPUs are incredibly well stocked with RAM (i.e. 32MB L3 on Threadripper, 16MB L3 on Ryzen, and plenty of L2+L3 on Skylake servers), it seems unlikely that ASICs would be able to compete well against CPUs. In fact, a large number of people seem incredibly confident in Cryptonight's ASIC resistance. And indeed, anyone who knows how standard DDR4 works knows that DDR4 is unacceptable for Cryptonight. GDDR5 similarly doesn't look like a very good technology for Cryptonight, since it focuses on high bandwidth instead of low latency. Which suggests only on-ASIC RAM would be able to handle the 2MB that Cryptonight uses. Solid argument, but it seems to be missing a critical point of analysis in my eyes. What about "exotic" RAM, like RLDRAM3? Or even QDR-IV?
QDR-IV SRAM is absurdly expensive, but it's a good example of "exotic RAM" that is available on the marketplace, and I'm focusing on it because QDR-IV is really simple to describe. QDR-IV costs roughly $290 for 16Mbit x 18 bits. It is true static RAM. The 18 bits are 8 bits per byte + 1 parity bit, because QDR-IV is usually designed for high-speed routers.

QDR-IV has none of the speed or latency issues of DDR4 RAM. There are no "banks", there are no "refreshes", there is no "obliterate the data as you load it into the sense amplifiers", and no "auto-precharge" as you load the data from the sense amps back into the capacitors. Anything that could have caused latency issues is gone; QDR-IV is about as fast as you can get latency-wise. Every clock cycle you specify an address, and QDR-IV will generate a response every clock cycle. In fact, QDR means "quad data rate": the SRAM performs 2 reads and 2 writes per clock cycle. There is a slight amount of latency: 8 clock cycles for reads (7.5 nanoseconds) and 5 clock cycles for writes (4.6 nanoseconds). For those keeping track at home: AMD Zen's L3 cache has a latency of 40 clocks, aka 10 nanoseconds at 4GHz.

Basically, QDR-IV BEATS the L3 latency of modern CPUs. And we haven't even begun to talk about software or ASIC optimizations yet.
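The cycle-to-nanosecond figures above are easy to check. A clock of roughly 1066 MHz is assumed here (it is the clock implied by 8 cycles ≈ 7.5 ns; the exact part and speed grade are not given in the post, and at that clock the write latency rounds to 4.7 ns rather than the post's 4.6 ns):

```python
def cycles_to_ns(cycles, clock_hz):
    """Convert a latency in clock cycles to nanoseconds."""
    return cycles / clock_hz * 1e9

QDR4_CLOCK = 1066e6  # assumed QDR-IV clock, ~1066 MHz

print(round(cycles_to_ns(8, QDR4_CLOCK), 1))   # 7.5  -> read latency, ns
print(round(cycles_to_ns(5, QDR4_CLOCK), 1))   # 4.7  -> write latency, ns
print(round(cycles_to_ns(40, 4e9), 1))         # 10.0 -> Zen L3: 40 clocks at 4 GHz
```

The comparison holds either way: even at these numbers, the exotic SRAM's read latency undercuts a modern CPU's L3 cache.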
CPU inefficiencies for Cryptonight
Now, if that weren't bad enough... CPUs have a few problems with the Cryptonight algorithm.
AMD Zen and Intel Skylake CPUs transfer data from L3 -> L2 -> L1 cache. Each of these transfers is a 64-byte cache line, but Cryptonight only uses 16 of those bytes. This means that 75% of L3 cache bandwidth is wasted on 48 bytes that will never be used per inner loop of Cryptonight. An ASIC would transfer only 16 bytes at a time, instantly increasing the RAM's effective speed 4-fold.
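The 75% and 4x figures are just the ratio of bytes used to bytes moved:

```python
CACHE_LINE = 64  # bytes per L3 -> L2 -> L1 transfer on Zen / Skylake
USED = 16        # bytes Cryptonight actually touches per inner loop

wasted = 1 - USED / CACHE_LINE      # fraction of bandwidth carrying dead bytes
speedup = CACHE_LINE // USED        # gain for an ASIC fetching only 16 bytes
print(f"{wasted:.0%} wasted, {speedup}x effective-bandwidth gain")
```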
AES-NI instructions on Ryzen / Threadripper can only be done one-per-core. This means a 16-core Threadripper can at most perform 16 AES encryptions per clock tick. An ASIC can perform as many as you'd like, up to the speed of the RAM.
CPUs waste a ton of energy: there's L1 and L2 caches which do NOTHING in Cryptonight. There are floating-point units, memory controllers, and more. An ASIC which strips things out to only the bare necessities (basically: AES for Cryptonight core) would be way more power efficient, even at ancient 65nm or 90nm designs.
QDR-IV and RLDRAM3 still have latency involved. Assuming 8 clocks of latency, the naive access pattern would be:

Cryptonight #1 Read
(8-clock stall waiting for the data)
Cryptonight #1 Write
(stall)
Cryptonight #1 Read #2
(stall)
...
This isn't very efficient: the RAM sits around waiting. Even with "latency reduced" RAM, you can see that the RAM still isn't doing very much. In fact, this is why people thought Cryptonight was safe against ASICs. But what if we instead ran four instances in parallel? That way, there is always data flowing.
Cryptonight #1 Read
Cryptonight #2 Read
Cryptonight #3 Read
Cryptonight #4 Read
Cryptonight #1 Write
Cryptonight #2 Write
Cryptonight #3 Write
Cryptonight #4 Write
Cryptonight #1 Read #2
Cryptonight #2 Read #2
Cryptonight #3 Read #2
Cryptonight #4 Read #2
Cryptonight #1 Write #2
Cryptonight #2 Write #2
Cryptonight #3 Write #2
Cryptonight #4 Write #2
Notice: we're doing 4x the Cryptonight in the same amount of time. Now imagine if the stalls were COMPLETELY gone. DDR4 CANNOT do this. And that's why most people thought ASICs were impossible for Cryptonight. Unfortunately, RLDRAM3 and QDR-IV can accomplish this kind of pipelining. In fact, that's what they were designed for.
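The interleaved schedule above can be generated mechanically: round-robin across N instances, alternating a read pass and a write pass each iteration. A sketch of the scheduling idea (not a hardware model; it just reproduces the access pattern listed above):

```python
def schedule(instances=4, iterations=2):
    """Round-robin memory schedule that hides per-instance latency.

    While instance #1 waits on its read, instances #2-#4 issue theirs,
    so the RAM always has a request in flight.
    """
    ops = []
    for i in range(1, iterations + 1):
        suffix = "" if i == 1 else f" #{i}"
        for op in ("Read", "Write"):
            for n in range(1, instances + 1):
                ops.append(f"Cryptonight #{n} {op}{suffix}")
    return ops

for line in schedule():
    print(line)
```

With a read latency of 8 clocks and 4 instances in flight, each instance's stall is covered by the other three's requests, which is exactly the pipelining RLDRAM3 and QDR-IV were designed for.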
As good as QDR-IV RAM is, it's way too expensive. RLDRAM3 is almost as fast, but way more complicated to use and describe. Due to the lower cost of RLDRAM3, however, I'd assume any ASIC for Cryptonight would use RLDRAM3 instead of the simpler QDR-IV. RLDRAM3 at 32Mbit x 36 bits costs $180 at quantity 1, and would support up to 64 parallel Cryptonight instances (in contrast, an $800 AMD 1950x Threadripper supports 16 at best). Such a design would basically operate at the maximum speed of RLDRAM3. In the case of a x36-bit bus at 2133MT/s, we're talking about 2133 million transfers / (Burst Length 4 x 4 reads/writes x 524288 inner loops) == 254 full Cryptonight hashes per second. 254 hashes per second sounds low, and it is. But we're talking about literally a two-chip design here: 1 chip for RAM, 1 chip for the ASIC/AES stuff. Such a design would consume no more than 5 Watts. If you were to replicate the ~5W design 60 times, you'd get 15,240 Hash/second at 300 Watts.
Depending on cost calculations, going cheaper and "making more" might be a better idea. RLDRAM2 is widely available at only $32 per chip at 800 MT/s. Such a design would theoretically support 800 million / (4 x 4 x 524288) == 95 Cryptonight hashes per second. The scary part: that RLDRAM2 chip only uses 1W of power, so you again get roughly 5 Watts as a reasonable power estimate for the pair. x60 would be 5,700 Hashes/second at 300 Watts. Here's Micron's whitepaper on RLDRAM2: https://www.micron.com/~/media/documents/products/technical-note/dram/tn4902.pdf . RLDRAM3 is the same but denser, faster, and more power efficient.
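Both hash-rate estimates above come from one formula: transfers per second divided by (burst length x memory operations per loop iteration x inner-loop iterations per hash). The 524,288-iteration inner loop and the burst-length-4 / 4-ops-per-iteration factors are as stated in the two paragraphs above:

```python
def hashes_per_second(mt_per_s, burst_len=4, mem_ops=4, inner_loops=524_288):
    """Peak Cryptonight hash rate when the memory bus is the only bottleneck."""
    transfers_per_hash = burst_len * mem_ops * inner_loops
    return mt_per_s / transfers_per_hash

print(round(hashes_per_second(2133e6)))  # RLDRAM3 at 2133 MT/s -> 254 H/s
print(round(hashes_per_second(800e6)))   # RLDRAM2 at 800 MT/s  -> 95 H/s
```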
Hybrid Memory Cube
Hybrid Memory Cube is "stacked RAM" designed for low latency. As far as I can tell, Hybrid Memory Cube allows an insane amount of parallelism and pipelining. It would be the future of an ASIC Cryptonight design. The existence of Hybrid Memory Cube is more about a "Generation 2" or later product; in effect, it demonstrates that future designs can be lower-power and higher-speed.
The overall board design would be the ASIC, a simple pipelined AES ASIC that talks to RLDRAM3 ($180) or RLDRAM2 ($30). It's hard for me to estimate an ASIC's cost without the right tools or design. But a multi-project wafer service like MOSIS offers "cheap" access to 14nm and 22nm nodes; rumor is that this is roughly $100k per run for ~40 dies, suitable for research and development. Mass production would require further investment, but mass production at the ~65nm node is rumored to be in the single-digit millions of dollars, or maybe even just 6 figures. So realistically speaking: it'd take a ~$10 million investment plus a talented engineer (or team of engineers) familiar with RLDRAM3, PCIe 3.0, ASIC design, AES, and Cryptonight to build such an ASIC.
Current CPUs waste 75% of L3 bandwidth because they transfer 64-bytes per cache-line, but only use 16-bytes per inner-loop of CryptoNight.
Low-latency RAM exists for only $200 for ~128MB (aka: 64-parallel instances of 2MB Cryptonight). Such RAM has an estimated speed of 254 Hash/second (RLDRAM 3) or 95 Hash/second (Cheaper and older RLDRAM 2)
ASICs are therefore not going to be capital friendly: between the higher component costs, the ASIC design investment, and the literally millions of dollars needed for mass production, this would be a project that costs a lot more than a CPU on a per-unit and per-hash/sec basis.
HOWEVER, a Cryptonight ASIC seems possible. Furthermore, such a design would be grossly more power-efficient than any CPU. Though the capital investment is high, the rewards of mass-production and scalability are also high. Data-centers are power-limited, so any Cryptonight ASIC would be orders of magnitude lower-power than a CPU / GPU.
EDIT: Greater discussion throughout today has led me to napkin-math an FPGA + RLDRAM3 option. I estimated roughly ~$5,000 (+/- 30%; it's a very crude estimate) for a machine that performs ~3,500 Hashes/second on an unknown number of Watts (maybe 75 Watts?): $2,000 FPGA, $2,400 RLDRAM3, $600 on PCBs, misc chips, assembly, etc. A more serious effort may use Hybrid Memory Cube to achieve much higher FPGA-based hashrates. My current guess is that this is an overestimate of the cost, so -30% if you can achieve some bulk discounts, optimize the hypothetical design, and manage to accomplish it on cheaper hardware.
Crypto and the Latency Arms Race: Crypto Exchanges and the HFT Crowd
By Max Boonen for CoinDesk. Carrying on from an earlier post about the evolution of high frequency trading (HFT), how it can harm markets and how crypto exchanges are responding, here we focus on the potential longer-term impact on the crypto ecosystem. First, though, we need to look at the state of HFT in a broader context.
Conventional markets are adopting anti-latency arbitrage mechanisms
In conventional markets, latency arbitrage has increased toxicity on lit venues and pushed trading volumes over-the-counter or into dark pools. In Europe, dark liquidity has increased in spite of efforts by regulators to clamp down on it. In some markets, regulation has actually contributed to this. Per the SEC:
“Using the Nasdaq market as a proxy, [Regulation] NMS did not seem to succeed in its mission to increase the display of limit orders in the marketplace. We have seen an increase in dark liquidity, smaller trade sizes, similar trading volumes, and a larger number of “small” venues.”
Why is non-lit execution remaining or becoming more successful in spite of its lower transparency? In its 2014 paper, BlackRock came out in favour of dark pools in the context of best execution requirements. It also lamented message congestion and cautioned against increasing tick sizes, features that advantage latency arbitrageurs. (This echoes the comment to CoinDesk of David Weisberger, CEO of Coinroutes, who explained that the tick sizes typical of the crypto market are small and therefore do not put slower traders at much of a disadvantage.) Major venues now recognize that the speed race threatens their business model in some markets, as it pushes those “slow” market makers with risk-absorbing capacity to provide liquidity to the likes of BlackRock off-exchange. Eurex has responded by implementing anti-latency arbitrage (ALA) mechanisms in options: “Right now, a lot of liquidity providers need to invest more into technology in order to protect themselves against other, very fast liquidity providers, than they can invest in their pricing for the end client. The end result of this is a certain imbalance, where we have a few very sophisticated liquidity providers that are very active in the order book and then a lot of liquidity providers that have the ability to provide prices to end clients, but are tending to do so more away from the order book”, commented Jonas Ullmann, Eurex’s head of market functionality. Such views are increasingly supported by academic research. XTX identifies two categories of ALA mechanisms: policy-based and technology-based. Policy-based ALA refers to a venue simply deciding that latency arbitrageurs are not allowed to trade on it. Alternative venues to exchanges (going under various acronyms such as ECN, ATS or MTF) can allow traders to either take or make, but not engage in both activities. 
Others can purposefully select — and advertise — their mix of market participants, or allow users to trade in separate “rooms” where undesired firms are excluded. The rise of “alternative microstructures” is mostly evidenced in crypto by the surge in electronic OTC trading, where traders can receive better prices than on exchange. Technology-based ALA encompasses delays, random or deterministic, added to an exchange’s matching engine to reduce the viability of latency arbitrage strategies. The classic example is a speed bump where new orders are delayed by a few milliseconds, but the cancellation of existing orders is not. This lets market makers place fresh quotes at the new prevailing market price without being run over by latency arbitrageurs. As a practical example, the London Metal Exchange recently announced an eight-millisecond speed bump on some contracts that are prime candidates for latency arbitrageurs due to their similarity to products trading on the much bigger CME in Chicago. Why 8 milliseconds? First, microwave transmission between Chicago and the US East Coast is 3 milliseconds faster than fibre optic lines. From there, the $250,000 a month Hibernia Express transatlantic cable helps you get to London another 4 milliseconds faster than cheaper alternatives. Add a millisecond for internal latencies such as not using FPGAs and 8 milliseconds is the difference for a liquidity provider between investing tens of millions in speed technology or being priced out of the market by latency arbitrage. With this in mind, let’s consider what the future holds for crypto.
Crypto exchanges must not forget their retail roots
We learn from conventional markets that liquidity benefits from a diverse base of market makers with risk-absorption capacity. Some have claimed that the spread compression witnessed in the bitcoin market since 2017 is due to electronification. Instead, I posit that it is greater risk-absorbing capacity and capital allocation that has improved the liquidity of the bitcoin market, not an increase in speed; in fact, being a fast exchange with colocation, as Gemini is, has not translated into higher volumes. Old-timers will remember Coinsetter, a company that, per the Bitcoin Wiki, “was created in 2012, and operates a bitcoin exchange and ECN. Coinsetter’s CSX trading technology enables millisecond trade execution times and offers one of the fastest API data streams in the industry.” The Wiki page should use the past tense, as Coinsetter failed to gain traction, was acquired in 2016 and subsequently closed. Exchanges that invest in scalability and user experience will thrive (BitMEX comes to mind). Crypto exchanges that favour the fastest traders (by reducing jitter, etc.) will find that winner-takes-all latency strategies do not improve liquidity. Furthermore, they risk antagonising the majority of their users, who are naturally suspicious of platforms that sell preferential treatment. It is baffling that the head of Russia for Huobi vaunted to CoinDesk that: “The option [of co-location] allows [selected clients] to make trades 70 to 100 times faster than other users”. The article notes that Huobi doesn’t charge — but of course, not everyone can sign up. Contrast this with one of the most successful exchanges today: Binance. It actively discourages some HFT strategies by tracking metrics such as order-to-trade ratios and temporarily blocking users that breach certain limits. Market experts know that Binance remains extremely relevant to price discovery, irrespective of its focus on a less professional user base. Other exchanges, take heed.
Coinbase closed its entire Chicago office where 30 engineers had worked on a faster matching engine, an exercise that is rumoured to have cost $50mm. After much internal debate, I bet that the company finally realised that it wouldn’t recoup its investment and that its value derived from having onboarded 20 million users, not from upgrading systems that are already fast and reliable by the standards of crypto. It is also unsurprising that Kraken’s Steve Hunt, a veteran of low-latency torchbearer Jump Trading, commented to CoinDesk that: “We want all customers regardless of size or scale to have equal access to our marketplace”. Experience speaks. In a recent article on CoinDesk, Matt Trudeau of ErisX points to the lower reliability of cloud-based services compared to dedicated, co-located and cross-connected gateways. That much is true. Web-based technology puts the emphasis on serving the greatest number of users concurrently, not on serving a subset of users deterministically and at the lowest latency possible. That is the point. Crypto might be the only asset class that is accessible directly to end users with a low number of intermediaries, precisely because of the crypto ethos and how the industry evolved. It is cheaper to buy $500 of bitcoin than it is to buy $500 of Microsoft shares. Trudeau further remarks that official, paid-for co-location is better than what he pejoratively calls “unsanctioned colocation,” the fact that crypto traders can place their servers in the same cloud providers as the exchanges. The fairness argument is dubious: anyone with $50 can set up an Amazon AWS account and run next to the major crypto exchanges, whereas cheap co-location starts at $1,000 a month in the real world. No wonder “speed technology revenues” are estimated at $1 billion for the major U.S. equity exchanges.
For a crypto exchange, to reside in a financial, non-cloud data centre with state-of-the-art network latencies might ironically impair the likelihood of success. The risk is that such an exchange becomes dominated on the taker side by the handful of players that already own or pay for the fastest communication routes between major financial data centres such as Equinix and the CME in Chicago, where bitcoin futures are traded. This might reduce liquidity on the exchange because a significant proportion of the crypto market’s risk-absorption capacity is coming from crypto-centric funds that do not have the scale to operate low-latency strategies, but might make up the bulk of the liquidity on, say, Binance. Such mom-and-pop liquidity providers might therefore shun an exchange that caters to larger players as a priority.
Exchanges risk losing market share to OTC liquidity providers
While voice trading in crypto has run its course, a major contribution to the market’s increase in liquidity circa 2017–2018 was the risk appetite of the original OTC voice desks such as Cumberland Mining and Circle. Automation really shines in bringing together risk-absorbing capacity tailored to each client (which is impossible on anonymous exchanges) with seamless electronic execution. In contrast, latency-sensitive venues can see liquidity evaporate in periods of stress, as happened to a well-known and otherwise successful exchange on 26 June which saw its bitcoin order book become $1,000 wide for an extended period of time as liquidity providers turned their systems off. The problem is compounded by the general unavailability of credit on cash exchanges, an issue that the OTC market’s settlement model avoids. As the crypto market matures, the business model of today’s major cash exchanges will come under pressure. In the past decade, the FX market has shown that retail traders benefit from better liquidity when they trade through different channels than institutional speculators. Systematic internalizers demonstrate the same in equities. This fact of life will apply to crypto. Exchanges have to pick a side: either cater to retail (or retail-driven intermediaries) or court HFTs. Now that an aggregator like Tagomi runs transaction cost analysis for their clients, it will become plainly obvious to investors with medium-term and long-term horizons (i.e. anyone not looking at the next 2 seconds) that their price impact on exchange is worse than against electronic OTC liquidity providers. Today, exchange fee structures are awkward because they must charge small users a lot to make up for crypto’s exceptionally high compliance and onboarding costs. Onboarding a single, small value user simply does not make sense unless fees are quite elevated. Exchanges end up over-charging large volume traders such as B2C2’s clients, another incentive to switch to OTC execution. 
In the alternative, what if crypto exchanges focus on HFT traders? In my opinion, the CME is a much better venue for institutional takers as fees are much lower and conventional trading firms will already be connected to it. My hypothesis is that most exchanges will not be able to compete with the CME for fast traders (after all, the CBOE itself gave up), and must cater to their retail user base instead. In a future post, we will explore other microstructures beyond all-to-all exchanges and bilateral OTC trading. Fiber threads image via Shutterstock
Discussions around ProgPoW, Ethash and RandomX resulted in one agreement: memory intensity (mainly bus intensity) can be used to achieve or increase resistance against ASICs, to bring mining back to the average Joe and re-distribute it. Meanwhile, a new algorithm called rainforest started being used in new coins such as MicroBitcoin. While the developer of said algorithm seems confident that it is expensive for ASICs and FPGAs to implement, issues have been found in the code, which resulted in (closed-source) GPU miners running at 1000x the original speed and FPGA vendors listing this algorithm as one of the coins possible to mine. Using the research done for the rainforest algorithm, a brand new hash called "Squash" has been created. It has similar properties to rainforest, meaning that it still utilizes "expensive" functions, but also runs at speeds very close to blake2 (4 to 5.5 cycles per byte, depending on the architecture). To also share properties with Ethash and ProgPoW, a variant called SquashPoW has been designed around the same interior design. This supposedly results in expensive ASICs with low potential gain and, more importantly, asymmetry. Asymmetry allows developers or "coins" to force a miner to run on a relatively large scratchpad while a verifier can run with significantly fewer resources and therefore still retain the ability to properly validate incoming blocks. More on that in the Ethash design rationale. Now, what's new in SquashPoW?
While ProgPoW and Ethash focus on FNV and SHA-3 for dataset generation, SquashPoW uses CRC32. CRC32 is already implemented in hardware on modern ARMv8 CPUs, which means that an ASIC won't be able to use the light evaluation method, and it also implies higher speeds for ARM CPUs (mobile phones, efficient servers, IoT devices).
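As a rough illustration of what a CRC32-based dataset-generation step could look like, here is a hypothetical sketch using Python's `zlib.crc32`. The function name, round count, and mixing scheme are all assumptions for illustration, not the actual SquashPoW code:

```python
import zlib

def mix_item(seed: bytes, index: int, rounds: int = 4) -> int:
    """Hypothetical CRC32-based dataset item: fold the item index into
    the seed's CRC, then repeatedly CRC the running 32-bit state."""
    state = zlib.crc32(index.to_bytes(4, "little"), zlib.crc32(seed))
    for _ in range(rounds):
        state = zlib.crc32(state.to_bytes(4, "little"))
    return state

# Each item depends on the full seed and its own index:
item0 = mix_item(b"block header seed", 0)
item1 = mix_item(b"block header seed", 1)
assert item0 != item1
```

On ARMv8 the inner `crc32` step would map to a single hardware instruction, which is the source of the claimed ARM advantage.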
ProgPoW and Ethash combine FNV and SHA-3 with many memory-read operations to get the final result of the hash. SquashPoW uses an entirely new function, which means there is no ASIC-optimised implementation to calculate a hash.
In contrast to RandomX, SquashPoW still allows and endorses GPU miners; those are a necessity for a healthy ecosystem. GPUs simply have to compute about 5x more than CPU miners do, while their much faster IO allows them an increased hashrate (until a) HBM4 replaces DDR6 or b) 3D-stacked CPUs become a thing).
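The asymmetry point above is essentially the Ethash light-evaluation trick: miners expand a small cache into a big dataset for fast access, while verifiers recompute individual items on demand. A minimal sketch, with sha256 as a stand-in mixer and all function names hypothetical:

```python
import hashlib

def cache_item(seed: bytes, i: int) -> bytes:
    """Small cache every node can afford to derive from the seed."""
    return hashlib.sha256(seed + i.to_bytes(4, "little")).digest()

def full_dataset(seed: bytes, n: int) -> list:
    """Miners materialize the whole dataset up front (memory-hungry)."""
    return [hashlib.sha256(cache_item(seed, i)).digest() for i in range(n)]

def light_item(seed: bytes, i: int) -> bytes:
    """Verifiers recompute a single item on demand instead of storing all."""
    return hashlib.sha256(cache_item(seed, i)).digest()

seed, n = b"epoch seed", 1024
dataset = full_dataset(seed, n)
# The light method reproduces any dataset item without the big array:
assert dataset[137] == light_item(seed, 137)
```

This is why the verifier can run on far fewer resources: it trades a large allocation for a little recomputation per accessed item.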
In case you are now interested in testing out SquashPoW, I highly recommend checking out the source code, which can be seen at the official GitHub repository. Please note, SquashPoW is merely a variation of the concepts of Ethash; if you enjoy this hash, please show the original some love. Please also note that this is merely a post to spread awareness. EDIT: A reference implementation can be found here
A cryptocurrency (or crypto currency) is a digital asset designed to work as a medium of exchange that uses cryptography to secure transactions, control the creation of additional units, and verify the transfer of assets.
Main article: Blockchain The validity of each cryptocurrency's coins is provided by a blockchain. A blockchain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a hash pointer as a link to a previous block, a timestamp and transaction data. By design, blockchains are inherently resistant to modification of the data. It is "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way". For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority. Blockchains are secure by design and are an example of a distributed computing system with high Byzantine fault tolerance. Decentralized consensus has therefore been achieved with a blockchain. Blockchains solve the double-spending problem without the need of a trusted authority or central server, assuming no 51% attack (which has worked against several cryptocurrencies).
Cryptocurrencies use various timestamping schemes to "prove" the validity of transactions added to the blockchain ledger without the need for a trusted third party. The first timestamping scheme invented was the proof-of-work scheme. The most widely used proof-of-work schemes are based on SHA-256 and scrypt. Some other hashing algorithms that are used for proof-of-work include CryptoNight, Blake, SHA-3, and X11. The proof-of-stake is a method of securing a cryptocurrency network and achieving distributed consensus through requesting users to show ownership of a certain amount of currency. It is different from proof-of-work systems that run difficult hashing algorithms to validate electronic transactions. The scheme is largely dependent on the coin, and there's currently no standard form of it. Some cryptocurrencies use a combined proof-of-work/proof-of-stake scheme.
📷 Hashcoin mine

In cryptocurrency networks, mining is a validation of transactions. For this effort, successful miners obtain new cryptocurrency as a reward. The reward decreases transaction fees by creating a complementary incentive to contribute to the processing power of the network. The rate of generating hashes, which validate any transaction, has been increased by the use of specialized machines such as FPGAs and ASICs running complex hashing algorithms like SHA-256 and scrypt. This arms race for cheaper-yet-efficient machines has been on since the day the first cryptocurrency, bitcoin, was introduced in 2009. With more people venturing into the world of virtual currency, generating hashes for this validation has become far more complex over the years, with miners having to invest large sums of money on employing multiple high-performance ASICs. Thus the value of the currency obtained for finding a hash often does not justify the amount of money spent on setting up the machines, the cooling facilities to overcome the enormous amount of heat they produce, and the electricity required to run them. Some miners pool resources, sharing their processing power over a network to split the reward equally, according to the amount of work they contributed to the probability of finding a block. A "share" is awarded to members of the mining pool who present a valid partial proof-of-work. As of February 2018, the Chinese government halted trading of virtual currency, banned initial coin offerings and shut down mining. Some Chinese miners have since relocated to Canada. One company is operating data centers for mining operations at Canadian oil and gas field sites, due to low gas prices. In June 2018, Hydro Quebec proposed to the provincial government to allocate 500 MW to crypto companies for mining. According to a February 2018 report from Fortune, Iceland has become a haven for cryptocurrency miners in part because of its cheap electricity.
Prices are contained because nearly all of the country's energy comes from renewable sources, prompting more mining companies to consider opening operations in Iceland. In March 2018, a town in Upstate New York put an 18-month moratorium on all cryptocurrency mining in an effort to preserve natural resources and the "character and direction" of the city.
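The pool "share" mechanism described above (a valid partial proof-of-work) can be sketched with toy numbers. The double-sha256 choice and both target values here are illustrative assumptions, not any particular coin's or pool's real parameters:

```python
import hashlib

# Toy targets: the pool credits work at a much easier threshold than
# what the network requires to accept a block.
NETWORK_TARGET = 1 << 224   # hard: what actually wins a block
SHARE_TARGET = 1 << 248     # easy: what the pool counts as a "share"

def hash_value(header: bytes, nonce: int) -> int:
    digest = hashlib.sha256(
        hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()).digest()
    return int.from_bytes(digest, "big")

def classify(header: bytes, nonce: int) -> str:
    value = hash_value(header, nonce)
    if value < NETWORK_TARGET:
        return "block"   # a full solution, which is also a share
    if value < SHARE_TARGET:
        return "share"   # valid partial proof-of-work, credited for payout
    return "stale"       # below neither threshold, not submitted
```

Since every real block also satisfies the easier share target, counting shares lets the pool meter each miner's contribution and split the reward proportionally.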
GPU price rise
An increase in cryptocurrency mining increased the demand for graphics cards (GPUs) in 2017. Popular favorites of cryptocurrency miners such as Nvidia's GTX 1060 and GTX 1070 graphics cards, as well as AMD's RX 570 and RX 580 GPUs, doubled or tripled in price – or were out of stock. A GTX 1070 Ti, released at a price of $450, sold for as much as $1100. Another popular card, the GTX 1060 (6 GB model), was released at an MSRP of $250 and sold for almost $500. RX 570 and RX 580 cards from AMD were out of stock for almost a year. Miners regularly buy up the entire stock of new GPUs as soon as they are available. Nvidia has asked retailers to do what they can when it comes to selling GPUs to gamers instead of miners. "Gamers come first for Nvidia," said Boris Böhles, PR manager for Nvidia in the German region.
📷 An example paper printable bitcoin wallet consisting of one bitcoin address for receiving and the corresponding private key for spending

Main article: Cryptocurrency wallet A cryptocurrency wallet stores the public and private "keys" or "addresses" which can be used to receive or spend the cryptocurrency. With the private key, it is possible to write in the public ledger, effectively spending the associated cryptocurrency. With the public key, it is possible for others to send currency to the wallet.
Bitcoin is pseudonymous rather than anonymous in that the cryptocurrency within a wallet is not tied to people, but rather to one or more specific keys (or "addresses"). Thereby, bitcoin owners are not identifiable, but all transactions are publicly available in the blockchain. Still, cryptocurrency exchanges are often required by law to collect the personal information of their users. Additions such as Zerocoin, Zerocash and CryptoNote have been suggested, which would allow for additional anonymity and fungibility.
[Very long, very serious] Development summary week ending 18th April 2014
When I got my first full time job, I used to try implementing requests from everyone as they came in, and for a while people really loved that I listened to their requests. Over time, however, things started to go wrong. I’d apply a change someone asked for, and in doing so would break something elsewhere in the code, in some subtle way that was missed in short-term testing. I’d fix that second bug and reveal a third. I’d fix that just in time for a new request to come in, and the process would repeat. This led to the term “bug whack-a-mole”, wherein I was spending time mostly fixing bugs introduced to live systems through rushing through earlier bug fixes. So this week, we’ve had a lot of people asking about changes to proof-of-work, especially X11, or even moving to proof of stake, primarily in an attempt to address the risk of a 51% attack. A 51% attack is where one actor (person, group, organisation, whatever) gains control of enough resources to be able to create their own blockchain, isolated from the main blockchain, at least as quickly as the main blockchain is being created. They can then spend Dogecoins on the main blockchain, before releasing their fake blockchain; if their fake blockchain is longer than the existing blockchain, nodes will switch to the new blockchain (as they would when repairing a fork), and essentially the spent Dogecoins on the main blockchain are reversed and can be spent again. This is mostly of consequence to exchanges and payment processors (such as Moolah), who are most likely to end up holding the loss from the double-spend. The concern about a 51% attack stems from a couple of weeks ago now, when Wafflepool was around 50% of the network hashrate (mining power). It’s still high (at the time of writing about 32GH/s out of almost 74GH/s, or about 43%), but it is diminishing as a proportion. Let’s talk about proof of stake first, as this one’s simpler.
Proof of stake has been suggested as a way of avoiding the risk of Wafflepool having control of too many mining resources by itself, by changing from securing the blockchain through computational resources (work), to using number of Dogecoin held. The theory is that those with most Dogecoins have most to lose, and will act in their own interests. Major examples of proof of stake coins include Peercoin, Mintcoin and more recently Blackcoin. However, this essentially means we take control from Wafflepool, and hand it to Cryptsy (who are considered most likely to be the holder of some of the huge Dogecoin wallets out there). I by no means expect either organisation to attempt a 51% attack, but hopefully it’s clear that simply switching risks isn’t actually improving things. I’ve also had significant concerns raised from the merchant/payment processor community about potential impact of proof of stake, and that it may encourage hoarding (as coins are awarded for holding coins, rather than for mining). The price instability of Mintcoin and Blackcoin (and that Peercoin appears to only avoid this through very high transaction fees to keep the entire network inert) does not encourage confidence, either. For now, proof of stake remains something we’re keeping in mind, primarily in case price does not react as anticipated to mining reward decreases over time, but certainly we’re not eager to rush into such a change. Before I get into a discussion on proof of work, let me summarise this quickly; right now, uncertainty about changes is holding back our community from adopting ASICs. It’s high risk to spend hundreds, thousands or in some cases significantly more on ASIC hardware which could be left useless if we move. Those who have already purchased ASICs to support the Dogecoin hashrate would most likely have to mine Litecoin to recover sunk costs, if we did move. 
ASICs are virtually inevitable, and in our assessment we are better off pushing for rapid adoption, rather than expending resources delaying a problem which will re-occur later. At the time of writing the development team has no plans to change proof of work algorithm outside of the eventuality of a major security break to Scrypt. We are focusing on mitigation approaches in case of a 51% attack, and adoption of the coin, as the most sustainable approaches to dealing with this risk. The X11 algorithm has been proposed as an alternative proof of work algorithm. X11, for those unaware, was introduced with Darkcoin. It’s a combination of 11 different SHA-3 candidate algorithms, using multiple rounds of hashing. The main advantage championed for Darkcoin is that current implementations run cooler on GPU hardware. Beyond that, there’s a lot of confusion over what it does and does not do. As I’m neither an algorithms nor an electronics specialist, I recruited a colleague who previously worked on the CERN computing grid to assist, and the following is primarily his analysis. A full technical report is coming for anyone who really likes detail; this is just a summary. A lot of people presume X11 is ASIC resistant; it’s not. Candidate algorithms for SHA-3 were assessed on a number of criteria, including simplicity to implement in hardware. All 11 algorithms have been implemented in FPGA hardware, and several in ASIC hardware already. The use of multiple algorithms does significantly complicate ASIC development, as it means the resulting chip would likely be extremely large. This has consequences for production, as the area of a chip is the main determining factor for the likelihood of an error in the chip. The short version being that while yes, it would take significant resources to make an efficient ASIC for X11, for a long time Scrypt was considered infeasible to adapt to ASICs.
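The multi-round chaining X11 uses can be illustrated as follows. Note the stand-ins: X11's real primitives (BLAKE, BMW, Grøstl, JH, Keccak, Skein, Luffa, CubeHash, SHAvite-3, SIMD, ECHO) are not available in Python's hashlib, so this sketch only shows the chaining structure, not the actual algorithm:

```python
import hashlib

# Stand-in chain of digests available in hashlib; X11 would instead
# feed each stage's output into the next of its 11 candidate algorithms.
CHAIN = [hashlib.blake2b, hashlib.sha3_256, hashlib.sha512,
         hashlib.sha3_512, hashlib.sha256, hashlib.blake2s]

def chained_hash(data: bytes) -> bytes:
    out = data
    for algo in CHAIN:       # each stage hashes the previous stage's digest
        out = algo(out).digest()
    return out

digest = chained_hash(b"block header")
assert len(digest) == 32     # final stage here is blake2s (32-byte digest)
```

The structure also hints at the hardware point made above: a naive ASIC needs a pipeline stage per algorithm, so chip area grows with the length of the chain.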
As stated earlier, any move would most likely be nothing more than an extremely expensive and risky delaying manoeuvre. ASIC efficiency would also depend heavily on the ability to optimise the combination of the algorithms; a naive implementation would run at around the rate of the slowest hashing algorithm, however if any common elements could be found amongst the algorithms, it may be that this could be improved upon significantly. There are also significant areas of concern with regards to X11. The “thermal efficiency” is most likely a result of the algorithm being a poor fit for GPU hardware. This means that GPU mining is closer to CPU mining (the X11 wiki article suggests a ratio of 3:1 for GPU/CPU mining performance), however it also means that if a way was found to improve performance there could be significantly faster software miners, leading to an ASIC-like edge without any of the hardware development costs. The component algorithms are all relatively new, and several were rejected during the SHA-3 competition for security concerns (see http://csrc.nist.gov/groups/ST/hash/sha-3/Round2/documents/Round2_Report_NISTIR_7764.pdf for full details). Security criteria for SHA-3 algorithms were also focused on the ability to generate collisions, rather than on producing hashes with specific criteria (such as number of leading 0s, which is how proof of work is usually assessed). X11 is a fascinating algorithm for new coins, however I would consider it exceptionally high risk for any existing coin to adopt. Beyond algorithm analysis, this week has been mostly about testing 1.7. Last weekend Patrick raised the issue that we had been incorrectly running the automated tests, which had led to several automated test failures being missed earlier. This led to other tasks being dropped as we quickly reworked the tests to match Dogecoin parameters instead of Bitcoin. So far, all tests have passed successfully once updated to match Dogecoin, however this work continues.
On the bright side, it turns out we have a lot more automated tests than we realised, which is very useful for later development. The source code repository for Dogecoin now also uses Travis CI, which sanity-checks patches submitted to the project, to help us catch any potential problems earlier, thanks to Tazz for leading the charge on that. This is particularly important as of course we’re developing on different platforms (Windows, OS X, Linux) and what works on one, may not work on others. Over time, this should be a significant time saver for the developers. For anyone wanting to help push Dogecoin forward, right now the most productive thing to be doing is testing either Dogecoin, or helping Bitcoin Core test pull requests. Feel free to drop by our Freenode channel for guidance on getting started with either. Right now, I’m working on the full technical report on X11, and will then be back working on the payment protocol for Dogecoin. I’ve approached a few virus scanning software companies about offering their products for Dogecoin, with so far no response, but will update you all if I hear more. Lastly, the next halvening (mining reward halving) is currently expected late on the 27th or early on the 28th, both times GMT. Given that it was initially expected on the 25th, we’re obviously seeing some slippage in estimates, and a total off the top of my head guess would be that we’ll see it around 0500 GMT on the 28th at this rate. I have taken the 28th off from the day job, and will be around both before and after in case of any problems (love you guys, not getting up at 5am to check on the blockchain, though!)
There's a pretty interesting debate in the AI space right now on whether FPGAs or ASICs are the way to go for hardware-accelerated AI in production. To summarize, it's more about how to operationalize AI - how to use already trained models with millions of parameters to get real-time predictions, like in video analysis or complex time series models based on deep neural networks. Training those AI models still seems to favor GPUs for now. Google seem to be betting big on ASICs with their TPU. On the other hand, Microsoft and Amazon seem to favor FPGAs. In fact Microsoft have recently partnered with Xilinx to add FPGA co-processors on half of their servers (they were previously only using Intel's Altera). The FPGA is the more flexible piece of hardware but it is less efficient than an ASIC, and FPGAs have been notoriously hard to program against (though things are improving). There's also a nice article out there summarizing the classical FPGA conundrum: they're great for designing and prototyping, but as soon as your architecture stabilizes and you're looking to ramp up production, taking the time to do an ASIC will more often be the better investment. So the question (for me) is where AI inference will be in that regard. I'm sure Google's projects are large scale enough that an ASIC makes sense, but not everyone is Google. And there is so much research being done in the AI space right now, and everyone's putting out so many promising new ideas, that being more flexible might carry an advantage. Google have already put out three versions of their TPUs in the space of two years. Which brings me back to Xilinx. They have a promising platform for AI acceleration both in the datacenter and embedded devices which was launched two months ago. If it catches on it's gonna give them a nice boost for the next couple of years. If it doesn't, they still have traditional Industrial, Aerospace & Defense workloads to fall back on...
Another wrinkle is their SoCs are being used in crypto mining ASICs like Antminer, so you never know how that demand is gonna go. As the value of BTC continues to sink there is constant demand for more efficient mining hardware, and I do think cryptocurrencies are here to stay. While NVDA has fallen off a cliff recently due to excess GPU inventory, XLNX has kept steady. XLNX TTM P/E is 28.98 Semiconductors - Programmable Logic industry's TTM P/E is 26.48 Thoughts?
During the fork issue last Monday, how were miners owning ASICs affected? Do the ASICs run version 0.7 or 0.8? Or is the ASIC a side-car computer that does the actual number crunching while the 0.7 or 0.8 node still runs on a conventional server? Does an ASIC even know the size of a block? (I would think so.) I didn't see any discussion of this. A bit worried this is too simple a question, but this is the perfect place to ask. Will pay/tip a bounty of 0.1 BTC to the answer of my choosing. (Give me 12 hours to tip.)
XMR-Stak - proudly XMR-only mining network stack (and CPU miner)
I want to show off what I was working on for the past 7 weeks or so. Just to clarify (there seems to be a lot of "give me money" posts around here recently), it will be FOSS. This is not some kind of crowd funding attempt. Of course the purpose of this topic is to gauge interest - I want to be sure that it is worth my time to polish up "own-use grade" into release-grade software, so if you like what you see please upvote and make a noise.
What do you mean by a network stack? What's wrong with the current one?
The network stack is essentially all the logic that lives between the hashing code and the output to the pool. While the software that I'm writing currently has a CPU miner on top, there is no reason why it can't be modified to hash through a GPU. The current stack used by the open source CPU miner and some GPU miners has been knocking around since 2011. Its design is less than ideal - command line args put a limit on how complex the configuration can get, and the flawed network interaction design means that it needs to keep talking to the pool (keep-alive) to detect that it is still there. Most importantly though, the code was designed for Bitcoin. Cryptonight coins have hashing speeds many orders of magnitude slower, which leads to different design choices. For example, both BTC and XMR have a 32-bit nonce. That means you have slightly over 4 billion attempts to find a block, and you need to add fudge code in BTC that is not needed in XMR.
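To put numbers on the 32-bit nonce point, here is a back-of-the-envelope calculation. The hashrates are assumed, order-of-magnitude figures (a SHA-256d ASIC vs a CryptoNight CPU in line with the benchmarks below), not measurements:

```python
NONCE_SPACE = 2 ** 32  # both BTC and XMR headers carry a 32-bit nonce

def seconds_to_exhaust(hashes_per_second: float) -> float:
    """Time for one worker to scan the full nonce range."""
    return NONCE_SPACE / hashes_per_second

btc_asic = 10e12   # ~10 TH/s, assumed SHA-256d ASIC rate
xmr_cpu = 300.0    # ~300 H/s, assumed CryptoNight CPU rate

# A BTC ASIC burns through all 2^32 nonces in under a millisecond, so
# extra-nonce rollover ("fudge code") is mandatory; a CryptoNight CPU
# would need months, so one 32-bit nonce per work unit is plenty.
assert seconds_to_exhaust(btc_asic) < 0.001
assert seconds_to_exhaust(xmr_cpu) > 100 * 24 * 3600
```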
CPU mining performance
I started off with Wolf's hashing code, but by the time I was done only a couple of lines of code remained similar. Performance is nearly identical to the closed-source paid miners. Here are some numbers:
i7-2600K - 266 H/s
i7-6700 - 276 H/s (with a separate GPU miner)
Dual X5650 - 466 H/s (depends on NUMA)
Dual E5640 - 365 H/s (same as above)
One of the most annoying things for me about the old mining stack was that it kept spewing huge amounts of redundant information. XMR-Stak prints reports when you request it to do so instead. Here they are (taken from the X5650 system running on Arch).
This is a bit of an academic exercise, showing why I don't believe that memory latency is the be-all and end-all of PoW. The idea is very simple: if we do two hashes at a time, we double the performance (as we have more time to load data from L3). We are of course still constrained by the L3 cache, but FPGAs with 50-100 MB of on-chip memory are already out.
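The two-at-a-time idea looks structurally like this. Python can only model the interleaving, not the latency hiding that makes it pay off on real hardware, and the function and its round count are illustrative assumptions:

```python
import hashlib

def hash_pair_interleaved(input_a: bytes, input_b: bytes, rounds: int = 8):
    """Advance two independent hash states in lockstep. On real hardware,
    the memory fetch for one lane overlaps the compute of the other,
    hiding L3/DRAM latency; here we only show the lockstep structure."""
    state_a, state_b = input_a, input_b
    for _ in range(rounds):
        # lane A issues its load/compute step ...
        state_a = hashlib.sha256(state_a).digest()
        # ... while lane B's step proceeds independently of lane A
        state_b = hashlib.sha256(state_b).digest()
    return state_a, state_b

a, b = hash_pair_interleaved(b"job-1", b"job-2")
assert a != b
```

Because the two lanes share the same cache, doubling throughput this way costs nothing extra in bandwidth-bound designs but doubles the working set, which is exactly the trade-off large on-chip-memory FPGAs can exploit.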
QuarkChain Testnet 1.0 was built based on standardized blockchain system requirements, which included network, wallet, browser, and virtual machine functionalities. Other than the fact that the token was a test currency, the environment was completely compatible with the main network. By enhancing the communication efficiency and security of the network, Testnet 2.0 further improves the openness of the network. In addition, Testnet 2.0 will allow community members (other than citizens or residents of the United States) to contribute directly to the network, i.e. running a full node and mining, and receive testnet tokens as rewards. QuarkChain Testnet 2.0 will support multiple mining algorithms, including two typical algorithms: Ethash and Double SHA256, as well as QuarkChain’s unique algorithm called Qkchash – a customized ASIC-resistant, CPU mining algorithm, exclusively developed by QuarkChain. Mining is available both on the root chain and on shards due to QuarkChain’s two-layered blockchain structure. Miners can flexibly choose to mine on the root chain with higher computing power requirements or on shards based on their own computing power levels. Our Goal By allowing community members to participate in mining on Testnet 2.0, our goal is to enhance QuarkChain’s community consensus, encourage community members to participate in testing and building the QuarkChain network, and gain first-hand experience of QuarkChain’s high flexibility and usability. During this time, we hope that the community can develop a better understanding about our mining algorithms, sharding technologies, and governance structures, etc. Furthermore, this will be a more thorough challenge to QuarkChain’s design before the launch of mainnet! Thus, we sincerely invite you to join the Testnet 2.0 mining event and build QuarkChain’s infrastructure together! 
Today, we’re pleased to announce that we are officially providing the CPU mining demo to the public (other than citizens and residents of the United States)! Everyone can participate in our mining event, and earn tQKC, which can be exchanged for real rewards by non-U.S. persons after the launch of our mainnet. Also, we expect to upgrade our testnet over time, and expect to allow GPU mining for Ethash, and ASIC mining for Double SHA256, in the future. In addition, in the near future, a mining pool that is compatible with all mining algorithms of QuarkChain is also expected to be supported. We hope all the community members can join in with us, and work together to complete this milestone! 2 Introduction to Mining Algorithms 2.1 What is mining? Mining is the process of generating new blocks, in which the records of current transactions are added to the record of past transactions. Miners use software that contributes their mining power to participate in the maintenance of a blockchain. In return, they obtain a certain amount of QKC per block, which is called the coinbase reward. Like many other blockchain technologies, QuarkChain adopts the most widely used Proof of Work (PoW) consensus algorithm to secure the network. A cryptographically-secure PoW is a costly and time-consuming process which is difficult to solve due to computational or memory intensity, but easy for others to verify. For a block to be valid, it must satisfy certain requirements and hash to a value less than the current target threshold. Reverting a block requires recreating all successor blocks and redoing the work they contain, which is costly. By running a cluster, everyone can become a miner and participate in the mining process. The mining rewards are proportional to the number of blocks mined by each individual.
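The "hash to a value less than the target" rule from section 2.1 can be sketched in a few lines. SHA-256 and the toy target are stand-ins (each QuarkChain shard uses its own algorithm and real difficulty), and all names here are illustrative:

```python
import hashlib

def pow_value(header: bytes, nonce: int) -> int:
    """Hash the header plus nonce and read the digest as an integer."""
    digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
    return int.from_bytes(digest, "big")

def mine(header: bytes, target: int, max_nonce: int = 500_000):
    """Scan nonces until the hash falls below the target; None if unlucky."""
    for nonce in range(max_nonce):
        if pow_value(header, nonce) < target:
            return nonce
    return None

def verify(header: bytes, nonce: int, target: int) -> bool:
    """Verification is a single hash: costly to produce, easy to check."""
    return pow_value(header, nonce) < target

# An easy toy target (top 12 bits zero) so the demo finds a nonce quickly:
target = 1 << (256 - 12)
nonce = mine(b"toy block header", target)
assert nonce is not None and verify(b"toy block header", nonce, target)
```

Lowering the target raises the difficulty, which is the knob each shard turns dynamically under the Boson consensus.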
2.2 Introduction to QuarkChain Algorithms and Mining Setup According to QuarkChain’s two-layered blockchain structure and Boson consensus, different shards can apply different consensus and mining algorithms. As part of the Boson consensus, each shard can adjust the difficulty dynamically to increase or decrease the hash power of each shard chain. In order to fully test QuarkChain Testnet 2.0, we adopt three different types of mining algorithms: Ethash, Double SHA256, and Qkchash, which is ASIC-resistant and exclusively developed by QuarkChain founder Qi Zhou. The first two hash algorithms correspond to the mining dominantly conducted on graphics processing units (GPU) and application-specific integrated circuits (ASIC), respectively. I. Ethash Ethash is the PoW mining algorithm for Ethereum. It is the latest version of the earlier Dagger-Hashimoto algorithm. Ethash is memory intensive, which makes it require large amounts of memory space in the process of mining. The efficiency of mining is basically independent of the CPU, but directly related to memory size and bandwidth. Therefore, by design, building an Ethash ASIC is relatively difficult. Currently, Ethash mining is dominantly conducted on GPU machines. Read more about Ethash: https://github.com/ethereum/wiki/wiki/Ethash II. Double SHA256 Double SHA256 is the PoW mining algorithm for Bitcoin. It is a computationally intensive hash algorithm, which uses two SHA256 iterations on the block header. If the hash result is less than the specific target, the mining is successful. ASIC machines have been developed by Bitmain to find more hashes with less electrical power usage. Read more about Double SHA256: https://en.bitcoin.it/wiki/Block_hashing_algorithm III. Qkchash Originally, Bitcoin mining was conducted on the CPUs of individual computers, with more cores and greater speed resulting in more profitability.
After that, the mining process became dominated by GPU machines, then field-programmable gate arrays (FPGA) and finally ASICs, in a race to achieve more hash rate with less electrical power usage. Due to this arms race, it has become increasingly harder for prospective new miners to join. This raises centralization concerns, because the manufacturers of high-performance ASICs are concentrated in a small few. To solve this, after extensive research and development, QuarkChain founder Dr. Qi Zhou has developed a mining algorithm, Qkchash, that is expected to be ASIC-resistant. The idea is motivated by the famous data structure, the order-statistic tree. Based on this data structure, Qkchash performs multiple search, insert, and delete operations in the tree, which tries to break the ASIC pipeline and makes the code execution path data-dependent and unpredictable, in addition to random memory-access patterns. Thus, the mining efficiency is closely related to the CPU, which ensures the security of the Boson consensus and encourages mining decentralization. Please refer to Dr. Qi’s paper for more details: https://medium.com/quarkchain-official/order-statistics-based-hash-algorithm-e40f108563c4 2.3 Testnet 2.0 mining configuration Number of shards: 8. Cluster: according to the real-time online mining nodes. Read more about Ethash with Guardian: https://github.com/QuarkChain/pyquarkchain/wiki/Ethash-with-Guardian We will provide cluster software and the demo implementation of CPU mining to the public. Miners are able to arbitrarily select one shard or multiple shards to mine according to the mining difficulty and rewards of different shards. GPU / ASIC mining is allowed if the public manages to get it working with the current testnet. With the upgrade of our testnet, we will further provide the corresponding GPU / ASIC software.
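The order-statistic idea behind Qkchash can be caricatured in a few lines. This is not the actual algorithm (see Dr. Qi's paper linked above); it is a sketch in which a sorted list stands in for the order-statistic tree, and the hash of the running state picks data-dependent ranks to read, delete, and re-insert:

```python
import bisect
import hashlib

def qkchash_like(seed: bytes, items: int = 64, ops: int = 128) -> bytes:
    """Illustrative only: an ordered set plus rank operations whose
    indices depend on the evolving hash state, making the access
    pattern data-dependent and hard to pipeline."""
    ordered = sorted(
        int.from_bytes(hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:8], "big")
        for i in range(items))
    state = hashlib.sha256(seed).digest()
    for _ in range(ops):
        rank = int.from_bytes(state[:4], "big") % len(ordered)  # data-dependent rank
        value = ordered.pop(rank)                               # select + delete by rank
        state = hashlib.sha256(state + value.to_bytes(8, "big")).digest()
        bisect.insort(ordered, int.from_bytes(state[:8], "big"))  # re-insert
    return state

h = qkchash_like(b"header-1")
assert h == qkchash_like(b"header-1")  # deterministic for a given input
```

A real order-statistic tree would make each rank operation O(log n) rather than the sorted list's O(n), but the sketch preserves the property the text describes: each step's memory access and control flow depend on the previous hash output.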
QuarkChain's two-layered blockchain structure, new P2P mode, and Boson consensus algorithm are expected to be fully tested and verified in QuarkChain testnet 2.0.

3 Mining Guidance

To encourage all community members to participate in the QuarkChain Testnet 2.0 mining event, we have prepared three mining guides for community members of different backgrounds. Today we are releasing the Docker mining tutorial first. This tutorial provides a command-line configuration guide for developers and a Docker image for multiple platforms, including a concise introduction to nodes and mining settings. Follow the instructions here: Quick Start with QuarkChain Mining. Next we will release: a tutorial for community members who don't have a programming background, in which we will show step by step how to create private QuarkChain nodes using AWS and how to mine QKC (expected in the next few days); and programs and APIs integrated with GPU / ASIC mining, which should allow existing miners to switch to QKC mining more seamlessly.

Frequently Asked Questions:

1. Can I use my laptop or personal computer to mine? Yes, we will provide cluster software and a demo implementation of CPU mining to the public. Miners will be able to select one shard or multiple shards to mine according to the difficulty and rewards of the different shards.

2. What are the minimum requirements for my laptop or personal computer to mine? Please prepare a Linux or macOS machine with a public IP address or port forwarding set up.

3. Can I mine with my GPU or an ASIC machine? For now, we will only provide the demo implementation of CPU mining as a first step. Interested miners/developers can write the corresponding GPU / ASIC mining program against the JSON-RPC API we provide. With upgrades to our testnet, we expect to provide the corresponding GPU / ASIC interface at a later date.

4.
What is the difference among the mining algorithms? Which one should I choose? Double SHA256 is a computationally intensive algorithm, while Ethash and Qkchash are memory-intensive algorithms that place certain requirements on the computer's memory. Since we currently support only CPU mining, mining efficiency depends entirely on the number of cores and the speed of the CPU.

5. For testnet mining, what else should I know? First, the mining process will occupy a computer's memory, so it is recommended to use an idle computer for mining. In the Testnet 2.0 settings, the target block time of the root chain is 60 seconds, and the target block time of each shard chain is 10 seconds. Mining is a completely random process that will take some time and consume a certain amount of electricity.

6. What are the risks of testnet mining? Our testnet is still under development and may not be 100% stable. There is therefore some risk of QuarkChain main chain forks in the testnet, software upgrades, and system reboots. These may cause your tQKC or block record to be lost despite our best efforts to ensure the stability and security of the testnet. For more technical questions, you are welcome to join our developer community on Discord: https://discord.me/quarkchain.

4 Reward Mechanism

Testnet 2.0 and all rewards described herein, including mining rewards, are not being offered and will not be available to any citizens or residents of the United States and certain other jurisdictions. All rewards will only be payable following the mainnet launch of QuarkChain. In order to claim or receive any of the following rewards after mainnet launch, you will be required to provide certain identifying documentation and information about yourself. Failure to provide such information or demonstrate compliance with the restrictions herein may result in forfeiture of all rewards, prohibition from participating in future QuarkChain programs, and other sanctions. NO U.S.
PERSONS MAY PARTICIPATE IN TESTNET 2.0 AND QUARKCHAIN WILL STRICTLY ENFORCE THIS VIA OUR KYC PROCEDURES. IF YOU ARE A CITIZEN OR RESIDENT OF THE UNITED STATES, DO NOT PARTICIPATE IN TESTNET 2.0. YOU WILL NOT RECEIVE ANY REWARDS FOR YOUR PARTICIPATION. 4.1 Mining Rewards
Prize Pool: A total prize pool of 5 million QKC has been reserved to motivate all miners to participate in the testnet 2.0 mining event. The prize pool is allocated across the mining algorithms as follows:
Total Prize Pool: 5,000,000 QKC
Prize Pool for Ethash Algorithm: 2,000,000 QKC
Prize Pool for Double SHA256 Algorithm: 1,000,000 QKC
Prize Pool for Qkchash Algorithm: 2,000,000 QKC

The number of QKC each miner is eligible to receive upon mainnet launch will be calculated on a pro-rata basis for each mining algorithm set forth above, based on the ratio of sharded blocks mined by each miner to the total number of sharded blocks mined by all miners employing that mining algorithm in Testnet 2.0.
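The pro-rata rule above is simple arithmetic; this sketch shows how a pool would be split, with the miner names and block counts below being made-up examples:

```python
def pro_rata_rewards(pool_qkc: float, blocks_by_miner: dict) -> dict:
    """Split a prize pool in proportion to the number of sharded
    blocks each miner found with a given mining algorithm."""
    total = sum(blocks_by_miner.values())
    return {m: pool_qkc * n / total for m, n in blocks_by_miner.items()}

# Hypothetical example: the 2,000,000 QKC Ethash pool split between
# two miners who found 300 and 100 sharded blocks respectively.
shares = pro_rata_rewards(2_000_000, {"alice": 300, "bob": 100})
```

With these numbers, alice receives 1,500,000 QKC and bob 500,000 QKC, since rewards scale linearly with each miner's share of all blocks found under that algorithm.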
Early-bird Rewards: To encourage people to participate early, we will provide early-bird rewards. Miners who participate in the first month (December 2018, PST) will earn double points. This additional point reward ends on December 31, 2018, 11:59 pm (PST).
4.2 Bonus for Bug Submission: If you find any bugs in the QuarkChain testnet, please feel free to create an issue on our GitHub page: https://github.com/QuarkChain/pyquarkchain/issues, or send us an email at [email protected]. We may provide rewards based on the importance and difficulty of the bugs.

4.3 Reward Rules: QuarkChain reserves the right to review the qualifications of the participants in this event. If any cheating behavior is found, the participant will be immediately disqualified from all rewards. QuarkChain further reserves the right to update the rules of the event, to stop the event/network, or to restart the event/network in its sole discretion, including the right to interpret any rules, terms, or conditions. For the latest information, please visit our official website or follow us on Telegram/Twitter.

About QuarkChain

QuarkChain is a flexible, scalable, and user-oriented blockchain infrastructure built on blockchain sharding technology. It is one of the first public chains in the world to successfully implement state sharding for blockchain. QuarkChain aims to deliver 100,000+ on-chain TPS; 14,000+ peak TPS has already been achieved on an early-stage testnet. QuarkChain already has over 50 partners in its ecosystem. With flexibility, scalability, and usability, QuarkChain is enabling EVERYONE to enjoy blockchain technology at ANYTIME and ANYWHERE.

Testnet 2.0 and all rewards described herein are not being and will not be offered in the United States or to any U.S. persons (as defined in Regulation S promulgated under the U.S. Securities Act of 1933, as amended) or any citizens or residents of countries subject to sanctions, including the Balkans, Belarus, Burma, Cote D'Ivoire, Cuba, Democratic Republic of Congo, Iran, Iraq, Liberia, North Korea, Sudan, Syria, Zimbabwe, Central African Republic, Crimea, Lebanon, Libya, Somalia, South Sudan, Venezuela and Yemen.
QuarkChain reserves the right to terminate, suspend, or prohibit participation of any user in Testnet 2.0 at any time. In order to claim or receive any rewards, including mining rewards, you will be required to provide certain identifying documentation and information. Failure to provide such information or demonstrate compliance with the restrictions herein may result in termination of your participation, forfeiture of all rewards, prohibition from participating in future QuarkChain programs, and other actions. This announcement is provided for informational purposes only and does not guarantee anyone a right to participate in or receive any rewards in connection with Testnet 2.0. Note: the use of Testnet 2.0 is subject to our terms and conditions, available at: https://quarkchain.io/testnet-2-0-terms-and-conditions/

More about QuarkChain: Website: https://quarkchain.io/cn/ Facebook: https://www.facebook.com/quarkchainofficial/ Twitter: https://twitter.com/Quark_Chain Telegram: https://t.me/quarkchainio
Will I earn money by mining? - An answer to all newcomers
When people start their adventure with Bitcoin, they often go through a small gold fever with the concept of mining (I would know, that's how I started ;) ). Here is a small guide to answer your eternal question "will I make money with it?": First of all, let's talk about hardware (click on the link for a long and useful list). You won't make money mining bitcoins unless you have a really high-end GPU from ATI, an FPGA, or an ASIC. That's the short answer. A decent CPU can be used for Litecoin mining, which can be a small income in itself, but we are here to talk about Bitcoin. To see whether you will earn any money, you need to input a few pieces of data into a special calculator:
cost of your hardware (cost of buying an ASIC, GPUs, motherboards, power supplies, etc.)
how fast it can hash (in megahashes per second); this you can get from your hardware list
your cost of electricity (check with your power company)
And then there are two magical variables that will either make it all work out or doom you to failure:
* difficulty - it is automatically filled in by the calculator, but for long-term mining (more than a few weeks) you want to be a pessimist. Multiply the value by 10 for predictions over a few months, or by 100 for a year or two (it will rise steeply soon)
* bitcoin price - also filled in by the calculator. It might go up or down in the future, affecting your bottom line. It will probably increase in the long run, but let's be pessimistic and lower it to $10-$20 to make sure we are earning money no matter what

Having all your hard data and your guesses on the last two variables, you put it all into the mining calculator and see what you get. You will get your earnings in BTC and dollars, a summary of your costs, when you will break even, and what your net income will be over your investment period. Most likely you won't be earning money with Bitcoin mining, and that's okay - mining has become a very specialised process. If you want to invest money into new ASICs, you might be able to turn a tidy profit. TLDR: Use this to check everything. ASICs may earn you money, GPUs won't anymore.
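A minimal sketch of what such a mining calculator computes: expected revenue is your fraction of the network hash rate times the day's block rewards, minus your electricity bill. All inputs below are made-up example values, not real network figures:

```python
def daily_profit_usd(hashrate_mhs, network_mhs, block_reward_btc,
                     blocks_per_day, btc_price_usd,
                     power_watts, usd_per_kwh):
    """Expected daily profit: your share of the network's block
    rewards, converted to USD, minus the cost of the power you burn."""
    revenue = (hashrate_mhs / network_mhs) * blocks_per_day \
              * block_reward_btc * btc_price_usd
    electricity = (power_watts / 1000) * 24 * usd_per_kwh
    return revenue - electricity

def days_to_break_even(hardware_cost_usd, daily_profit):
    """How long until cumulative profit covers the hardware cost."""
    return hardware_cost_usd / daily_profit

# Example: 1 GH/s rig on a 1 TH/s network, 25 BTC reward, ~144
# blocks/day, $100/BTC, 400 W at $0.10/kWh (all hypothetical).
profit = daily_profit_usd(1000, 1_000_000, 25, 144, 100, 400, 0.10)
```

Remember the pessimism advice above: multiply the difficulty (here, the network hash rate) by 10 or 100 before trusting any long-term projection.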
What is Bitcoin? Bitcoin is an experimental system for transferring and verifying property, based on a peer-to-peer network with no central authority. The initial application and main innovation of the Bitcoin network is a decentralized digital currency whose unit of account is the bitcoin. Bitcoin works with software and a protocol that allow participants to issue bitcoins and manage transactions in a collective and automatic way. As a free, open-source protocol, it also allows interoperability between the software and services that use it. As a currency, bitcoin is both a medium of payment and a store of value.

Bitcoin is designed to self-regulate. Its limited inflation is distributed homogeneously across the network's computing power, and the supply will be limited to 21 million units, divisible to the eighth decimal place. The functioning of the system is secured by an organization that everyone can examine, because everything is public: the basic protocols, the cryptographic algorithms, the programs that implement them, the account data, and the developers' discussions.

Possession of bitcoins is materialized by a sequence of numbers and letters making up a private key that allows spending the bitcoins associated with it on the ledger. A person may hold several keys, compiled in a "Bitcoin wallet" (web, software, or hardware) which gives access to the network in order to make transactions. The wallet holds public keys, used to check balances and receive payments, and also contains (often in encrypted form) the private keys associated with those public keys. These private keys must remain secret, because their owner can spend the bitcoins associated with them on the ledger. Any medium can hold the sequence of symbols constituting your keychain: paper, a USB stick, a memory card, etc.
With appropriate software, you can manage your assets on your computer or your phone. You can receive bitcoins on an account either from a holder of bitcoins who gives them to you, for example in exchange for goods, or by going through an exchange platform that converts conventional currencies into bitcoins, or by earning them through participating in the collective operation of the currency. The Bitcoin source code has been released under the open-source MIT license, which allows anyone to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the software, subject to inserting a copyright notice into all copies. Bitcoin's creator is known as Satoshi Nakamoto.

What is Bitcoin mining? Technical details: during mining, your computer performs cryptographic hashes (two successive SHA256 passes) on what is called a block header. For each new hash attempt, the mining software uses a different random number called the nonce. A block is only valid if its header hash is below the current target; how hard that is to achieve is expressed as the mining difficulty. The mining difficulty is calculated by comparing how difficult it is to generate a block now with how difficult it was to generate the first block. A difficulty of 70000 therefore means 70000 times more effort than it took Satoshi Nakamoto to generate the first block, back when mining was much slower and poorly optimized. The difficulty changes every 2016 blocks. The network adjusts the difficulty so that, at the current global computing power, generating 2016 blocks takes exactly 14 days. That is why the difficulty increases along with the power of the network.

Hardware: in the beginning, mining with a processor (CPU) was the only way to mine bitcoins. Graphics cards (GPUs) eventually replaced the CPU because their nature allowed an increase of 50x to 100x in computing power, while using less electricity per megahash than a CPU.
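The two mechanics described above, the double SHA-256 pass over the block header and the 2016-block difficulty retarget, can be sketched as follows. This is illustrative, not consensus-exact code: real Bitcoin encodes the target in the compact "bits" field and, as a detail the text omits, clamps each retarget to a factor of 4 in either direction:

```python
import hashlib

def double_sha256(header: bytes) -> bytes:
    # Two successive SHA-256 passes over the serialized block header.
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    # The header hash, interpreted as an integer, must fall below
    # the current target for the block to be valid.
    return int.from_bytes(double_sha256(header), "little") < target

def retarget(old_difficulty: float, actual_seconds: float) -> float:
    # Every 2016 blocks, difficulty is rescaled so that the next
    # 2016 blocks take ~14 days at the observed hash rate.
    expected = 14 * 24 * 60 * 60          # 1,209,600 seconds
    ratio = max(0.25, min(4.0, expected / actual_seconds))
    return old_difficulty * ratio
```

For example, if the last 2016 blocks took only 7 days instead of 14, the difficulty doubles; a miner varies the nonce in the header until `meets_target` succeeds.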
Although any modern GPU can be used for mining, AMD's GPU architecture proved far superior to nVidia's for mining bitcoins, and the ATI Radeon HD 5870 was the most economical card for a time. For a more complete list of graphics cards and their performance, see the Bitcoin wiki: comparison of mining hardware.

In the same way as the CPU-to-GPU transition, the world of mining evolved toward using field-programmable gate arrays (FPGAs) as a mining platform. Although FPGAs did not offer the 50x to 100x increase in computing speed seen in the transition from CPU to GPU, they offered much better energy efficiency. A typical 600 MH/s graphics card consumes about 400 W of power, while a typical FPGA device can offer a hash rate of 826 MH/s at 80 W of power consumption, a gain of roughly 7x more calculations for the same energy. Since energy efficiency is a key factor in mining profitability, the GPU-to-FPGA migration was an important step for many people.

The world of bitcoin mining is now migrating to application-specific integrated circuits (ASICs). An ASIC is a chip designed specifically to accomplish a single task. Unlike FPGAs, an ASIC cannot be reprogrammed for other tasks: an ASIC designed to mine bitcoins cannot and will not do anything other than mine bitcoins. This rigidity allows ASICs to offer a 100x increase in computing power while reducing power consumption compared to all other technologies. For example, a typical device offers 60 GH/s (1 GH/s = 1000 MH/s) while consuming 60 W of electricity. Compared to a GPU, that is a 100x increase in computing power and a reduction in power consumption by a factor of 7. Unlike the generations of technology that preceded it, ASIC is the "end of the line" when it comes to important technology changes.
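Using the figures quoted above, the energy-efficiency comparison is simple arithmetic: divide the hash rate by the power draw to get work per joule.

```python
def mh_per_joule(mh_per_s: float, watts: float) -> float:
    # Hash rate divided by power gives work per unit energy (MH/J),
    # the quantity that actually determines mining profitability.
    return mh_per_s / watts

gpu_eff  = mh_per_joule(600, 400)   # typical GPU: 600 MH/s at 400 W
fpga_eff = mh_per_joule(826, 80)    # typical FPGA: 826 MH/s at 80 W
gain = fpga_eff / gpu_eff           # roughly 7x more hashes per joule
```

This is why a slower but more efficient device can out-earn a faster one once electricity costs are included.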
CPUs were replaced by GPUs, themselves replaced by FPGAs, which were replaced by ASICs. Nothing can replace ASICs now or in the immediate future. There will be technological refinements in ASIC products and improvements in energy efficiency, but nothing that can match a 50x to 100x increase in computing power or a 7x reduction in power consumption compared with the previous technology. This means that energy efficiency is the only important factor of any ASIC product, since the expected lifetime of an ASIC device is longer than the entire history of bitcoin mining so far. It is conceivable that an ASIC device purchased today will still be in operation in two years, if it remains energy-efficient enough to be profitable. The profitability of mining is also determined by the value of bitcoin, but in all cases, the better a device's energy efficiency, the more profitable it is.

Software: there are two ways to mine: by yourself, or as part of a team (a pool). If you mine for yourself, you must install the Bitcoin software and configure it for JSON-RPC (see: running Bitcoin). The other option is to join a pool; there are multiple pools available. With a pool, the profit generated by any block found by a member of the team is split between all members of the team. The advantage of joining a pool is to increase the frequency and stability of earnings (this is called reducing the variance), but each payout will be smaller. In the end, you will earn the same amount with the two approaches. Mining solo gives you huge but very infrequent payouts, while mining with a pool offers small, stable, steady gains. Once you have your software configured or have joined a pool, the next step is to configure the mining software. The most popular mining software for ASIC/FPGA/GPU is currently CGMiner, or a derivative designed specifically for FPGAs and ASICs, BFGMiner.
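The claim that solo and pooled mining have the same expected income but different variance can be checked with a little Poisson arithmetic; the rates and payouts below are made-up examples:

```python
import math

def daily_income_stats(events_per_day: float, payout_per_event: float):
    """If block (or share) finds follow a Poisson process, daily
    income has mean = rate * payout and standard deviation
    = payout * sqrt(rate)."""
    mean = events_per_day * payout_per_event
    std = payout_per_event * math.sqrt(events_per_day)
    return mean, std

# Solo: one whole 25 BTC block expected every 100 days.
solo_mean, solo_std = daily_income_stats(0.01, 25)
# In a pool 100x your size: 100x more frequent payouts, each 100x
# smaller (ignoring pool fees).
pool_mean, pool_std = daily_income_stats(1.0, 0.25)
```

Both setups average 0.25 BTC/day, but the pooled income fluctuates ten times less from day to day, which is exactly the "reducing the variance" trade-off described above.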
If you want a quick overview of mining without installing any software, try Bitcoin Plus, a Bitcoin miner that runs in your browser on your CPU. It is not profitable for serious mining, but it is a good demonstration of the principle.
FAQ#1: Can I mine Bitcoins using this old thing I found in my closet?
If you're wondering whether it is feasible to mine Bitcoin using your CPU, GPU (video card), cell phone, PS3, Xbox, or original 1985 NES, the pedantically true answer is: yes, you can! However, the time required to get set up initially and the electrical efficiency of the hardware in question usually lead economically minded people to the same answer, which is: no! Bitcoin mining is a highly competitive technological arms race, and you should never bring a knife to a gun fight. The only two types of Bitcoin miners you want to be looking at right now are:
ASIC (Better option, if you can get one)
FPGA (Always available, not the best ROI at the moment however)
Feel free to kindly link any newbies asking a similar question to this post and downvote their original one, you can find this post stickied in the sidebar.
Last weekend, I watched the price of Bitcoin skyrocket from a few hours' worth of work to an entire week's paycheck. This caused a giddy mania in me: I wanted in on this action, and I wanted to buy in at a low price so I could sell high. Maybe I could get in on the mining ordeal and put my Computer Science degree to good use by investing several thousand dollars in an FPGA/ASIC. However, there is a major risk in my desire. Ultimately, I want to sell the Bitcoin for liquid cash. I am not aware of many companies that accept Bitcoin, and those that do offer products and services that do not interest me. Ultimately, if I buy a Bitcoin, I will want to use it to pay my bills. In order to sell it and actually make money, I need the price to stay the same or continue rising. My fear is that every other person on the Internet who is causing this surge in demand has the same goal: buy low, sell high. If they buy for $2000 and sell for $20,000, they might be that much closer to paying off their car, covering this month's utility bill, eating a fine dinner after a long stretch of ramen noodles and water, or possibly moving out of their apartment into their very own house. However, with that $2000 stored in a currency that cannot be used for these activities without conversion, the consequence of a devaluation will scare that individual away from ever storing their money in Bitcoin again. If that $2000 was already going toward these activities and I decide to invest it, I might end up with nothing in return. I fear that the moment I decide to sell, so will everybody else, and the demand for Bitcoin will plummet and cause a crash in the market similar to Tulip mania. Until I can pay my rent, buy my groceries, and fill up my gas tank in Bitcoin, I will not participate in this market.
Cryptography and the future of single-crypto coins versus multi-crypto coins
Bitcoin, currently the world's largest decentralized digital currency, has made headlines throughout the world. With market values of over $1,000 per coin, it has become a valuable commodity to invest in. Many individuals, though, do not understand the basic cryptographic underpinnings of Bitcoin and of altcoins (a term used to describe other digital currencies). Bitcoin is based on a cryptographic hash function called SHA-256, which is a member of the SHA-2 family. SHA-2 was designed by the US National Security Agency (NSA) and published in 2001. Currently, it is one of the most secure and most widely used cryptographic hash functions in the world. While SHA-2 has proven its strength over the years, it is not without potential weaknesses: like any cryptographic hash function, it is in principle susceptible to birthday and collision attacks. While SHA-2 is still considered sufficient despite these possible weaknesses, it is difficult to say what the future holds. If the cryptographic basis of a currency can be undermined at any point in the future, how can that currency maintain long-term value?

Enter Quark (QRK). Quark is a distributed, non-centralized currency much like Bitcoin, but with several key differences. This article focuses solely on the cryptographic changes. Quark employs not one cryptographic function but a combination of six: BLAKE, BMW, Grøstl, JH, Keccak, and Skein. These were all candidates in the NIST hash function competition, which ended in October 2012. While I won't go into specific detail on each algorithm here, I'll explain why using multiple algorithms helps Quark in the long run. As previously mentioned, Bitcoin (and other digital currencies) are based on one cryptographic algorithm, leaving them open to possible attacks in the future. While Bitcoin users have discussed the possibility of switching hash functions in the future, it is not a guaranteed option, and it could cause instability in the Bitcoin value.
With the market booming with ASIC and FPGA mining hardware, there are more opportunities for individuals to turn their hardware power toward attacking SHA-2 and thereby damaging Bitcoin's value. With multi-layer, multi-hash designs such as the one Quark employs, it is extremely difficult to break down the entire structure. Even if a weakness is found in one, or even several, of the cryptographic algorithms Quark employs, the whole construction is not destroyed, because an attacker must still get through the remaining unbroken layers. Multi-tiered hash designs provide a solid basis for a currency intended to stand the test of time. Not only do they provide more durability and security than single-hash functions, they also greatly limit the ability of ASIC miners to dominate the mining market, keeping mining viable and worthwhile for "entry-level" and "mid-level" miners and their computing power. Resources: http://en.wikipedia.org/wiki/SHA-2 http://en.wikipedia.org/wiki/NIST_hash_function_competition https://bitcointalk.org/index.php?topic=191.msg1585#msg1585 EDIT: 11:31 CST, 12/16/13 ... Fixed known typos.
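As an illustration of the chaining idea only: Python's hashlib does not include BMW, Grøstl, or JH, so the sketch below substitutes SHA-3 and BLAKE2 functions for Quark's actual six primitives. It shows how feeding each digest into the next function composes the layers, so breaking one function alone does not invert the whole chain:

```python
import hashlib

def chained_hash(data: bytes) -> bytes:
    """Illustrative multi-round hash in the spirit of Quark's design.
    Each round's digest becomes the next round's input. The functions
    here are stand-ins, NOT Quark's real BLAKE/BMW/Groestl/JH/
    Keccak/Skein pipeline."""
    h = data
    for algo in (hashlib.sha3_256, hashlib.blake2b,
                 hashlib.sha3_512, hashlib.blake2s):
        h = algo(h).digest()
    return h
```

An attacker who can find preimages for one of the inner functions still faces the unbroken outer layers, which is the resilience argument made above.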
An opinion on how Bitcoin is good for gaming [xpost r/truegaming]
A few days ago, I was a bit bored (read: procrastinating), and I ended up reading about Bitcoin. For those of you unfamiliar with the topic, the gist is this: Bitcoin is a decentralized currency based entirely in cyberspace, derived from cryptographic hashes. Its value is whatever value the market as a collective assigns to it, much like most modern currencies (USD, Euro, etc.). Transactions are enacted entirely over p2p systems, so it has useful applications ranging from legal online marketplace deals, to gray-area deals such as Wikileaks donations, to illicit activities on Silk Road, much like real-world cash. New bitcoin units are 'mined' by using a computer to solve the cryptographic puzzles set by the p2p system. (Summarized from Wikipedia and the official site.)

What applies to gaming is the mining of new coins. Originally users would let their CPU mine blocks, but as the difficulty of mining increased, users had to pool resources and work together. Eventually, someone realized that mining could be done more efficiently using a computer's GPU instead of the CPU. The reason this was a big step has to do with the GPU's ability to crunch the numbers of a hashed block much more effectively (for more information, read the informative wiki page on it). As more users switched to this method, the difficulty increased to balance the new capabilities. Avarice finds a way, though: the more devoted users are employing dedicated 'mining rigs' to increase profits, and some entrepreneurial souls have started producing and marketing advanced rigs for extreme mining (e.g. ztex and butterflylabs). This may not be good for the Bitcoin economy as a whole, because it creates an upper echelon of power users who can afford the economic arms race, and it may not be ultimately sustainable. BUT, this could be good for gaming, because if the arms race continues we could see accelerated development of GPU technology.
This is all conjecture of course, but in my opinion, a Bitcoin bubble may be good for the community as a whole. Conflict of Interest note: I am not affiliated with Bitcoin and do not personally invest in this currency. Comment from thread:
You wrongly assume that bitcoins are even a factor in pushing hardware manufacturers to improve. It's not even a drop in the bucket compared to all other more relevant types of software that make use of graphics cards. -FromMars
My view was that this niche allows smaller companies to come in and compete with the larger manufacturers (AMD/Nvidia) and bring new ideas to the table for efficient processing. I concede that high end applications will probably drive the gpu industry, but I posit that there is a chance for outside development ideas. -wessubba
Can someone explain to me the advantage of FPGAs over Mining Rigs?
I was on the bitcoin IRC and people were talking about FPGAs, and this one was mentioned as a good option. It costs $580 and gets 300 MH/s. For $587, you could build a GPU mining rig that produces 630 MH/s, according to bitcoin.org. So what's the big deal with FPGAs for mining?
How does solo bitcoin mining connect with the larger bitcoin network?
I think I have a basic understanding of how bitcoin mining works (on a small-time scale at least). If my understanding is correct, a solo user can basically just run a hashing algorithm on their PC (or FPGA or ASIC device or whatever) and hope to hit a hash that's within the specified target range. What I'm confused about is what happens afterwards. So you hit this hash, and you've got the input that resulted in that particular hash on hand: how do you grant yourself 25 bitcoins in such a way that others recognize that you are in possession of those bitcoins? I read up on the block chain and whatnot, but I'm still confused. Do you simply download the latest copy of the block chain (wherever that is) and append a statement saying you hit this hash and now have this many more bitcoins? It's all a touch muddled in my head - any clarification would be awesome
Bitcoin mining evolves from GPU to FPGA and now custom ASIC
Bitcoin mining used to be rows of consumer GPUs brute-forcing the SHA-256 hash (max ~2000 Mhash/s). After that, people and companies started using expensive FPGA setups to do it faster (100-25000 Mhash/s). Now companies have developed custom ASICs for bitcoin mining (Avalon, BitForce, Deepbit), with impressive speeds of 4.5K to 1500K Mhash/s. This increase in computing ability will probably put GPU mining out of business and shake up the market a bit. Some numbers: https://en.bitcoin.it/wiki/Mining_hardware_comparison
ASIC mining: an application-specific integrated circuit, or ASIC, is a microchip designed and manufactured for a very specific purpose. ASICs designed for Bitcoin mining were first released in 2013. For the amount of power they consume, they are vastly more energy-efficient than predecessor approaches to mining using CPUs, GPUs, or FPGAs.

A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by the customer or designer after manufacturing, hence "field-programmable". FPGAs can be tailored after manufacture to suit a particular task, such as mining bitcoins.

Bitcoin mining is the process of recording transactions to the blockchain, the public database of all Bitcoin operations, which is responsible for transaction confirmation. Network nodes use the blockchain to distinguish real transactions from attempts to spend the same funds twice. The main objective of mining is reaching consensus between network nodes on which transactions are valid.

In 2018, mining farms started observing that FPGAs were getting back into the game. Initially, the only solutions available on the market were the Xilinx VCU 1525 (produced at low scale), followed by the BCU 1525, produced at medium scale. In addition to these two boards, many other manufacturers, such as Huawei, Intel, and even Xilinx itself, have followed. The first open-source FPGA Bitcoin miner, which works either in a mining pool or solo with a compatible FPGA board, was released on May 20, 2011.
FPGA Mining Is Back! Crushes GPU Mining with $20-57 a Day ...
Do you guys think FPGAs will one day take over GPU mining? If certain coins do not change their algorithms, we will see the dominance of FPGAs. One example of mining-oriented FPGA hardware is the Cairnsmore1 Quad board developed and built by Enterpoint (UK), featuring 4 x Spartan-6 XC6SLX150 FPGAs wired like two sets of Icarus boards.