Reddit User Account Overview


Redditor Since October 26, 2010 (3,576 days old)
Karma — Posts: 5,028; Comments: 36,315; Combined: 41,343

Between Feb 1st and Feb 23rd, at least 1 hour elapsed between blocks 59 different times. Of the 59 blocks that broke these dry spells, 37 (62.7%) were mined by BTC.TOP. The hashrate during these dry spells is otherwise about 2.0-2.5 EH/s, which indicates that BTC.TOP is moving about 4 EH/s over to BCH during long dry spells -- that's about 100% of their SHA256 hashrate.

This behavior started on Feb 1st. There were 175 blocks that took at least 1 hour to mine between Dec 15 and Jan 31st, and of those, only 6 (3.4%) were mined by BTC.TOP. Before Feb 1st, BTC.TOP was actively *avoiding* unprofitable mining. Now, they're actively seeking it out during the worst dry spells, with the intent of preventing confirmation times from getting too long. During dry spells, it's usually around 10% more profitable to mine BTC than to mine BCH, so each time BTC.TOP does this, they're losing around $440 in revenue. That's about $16,400 so far, or around $20,000 per month. I think BTC.TOP deserves a big thank you for doing this.

I also think the BCH community needs to fix the difficulty adjustment algorithm, since that's the reason we're getting these dry spells in the first place. Teaser: I'm working on a video and/or article explaining the problem and how we can fix it. This discovery came while doing research for that. Expect it to be published within the next few days.

Below is a list of all blocks since 613500 that took more than 1 hour to mine (according to their timestamps), the exact delay (in seconds), and whether they were mined by BTC.TOP, if anyone is curious. I also have data on who mined the rest, but I omitted it from this list to make it easier to read. (Some of the other pool names were too hard to visually distinguish from each other.) If anyone wants the full list, let me know and I'll reformat it and post it in a comment.
Height Delay(s) By 623400 3622 623261 3659 623260 5348 623248 4208 BTC.TOP 623232 3712 BTC.TOP 623084 3756 BTC.TOP 622970 3974 622951 3915 622942 4481 BTC.TOP 622890 3626 622873 4082 BTC.TOP 622787 3933 BTC.TOP 622735 4194 BTC.TOP 622722 3755 BTC.TOP 622721 4001 BTC.TOP 622638 5511 BTC.TOP 622588 3964 622566 4057 BTC.TOP 622440 4474 622416 4085 622354 3781 BTC.TOP 622289 3647 BTC.TOP 622201 4125 BTC.TOP 622191 4280 BTC.TOP 622176 4278 622175 3722 BTC.TOP 622039 4265 BTC.TOP 621995 4500 BTC.TOP 621979 4181 BTC.TOP 621920 4141 621891 3752 BTC.TOP 621890 3960 621772 4492 BTC.TOP 621683 3720 BTC.TOP 621662 4126 621534 4192 621464 3962 621440 5588 BTC.TOP 621382 4773 BTC.TOP 621293 4651 621292 3689 BTC.TOP 621236 3895 BTC.TOP 621201 3729 BTC.TOP 621147 4196 621144 3918 BTC.TOP 621130 4305 621107 3783 621079 4236 BTC.TOP 620996 6075 BTC.TOP 620995 4969 BTC.TOP 620962 3605 620926 3671 BTC.TOP 620903 5031 BTC.TOP 620813 3652 620655 3966 BTC.TOP 620643 4319 620488 4058 BTC.TOP 620485 4191 BTC.TOP 620332 5128 BTC.TOP 620331 6599 620328 4893 620325 6391 620314 7993 BTC.TOP 620181 10053 620178 4315 620175 6068 620168 8694 620026 19146 620020 5153 619875 6057 619871 6157 619869 4263 619808 3739 BTC.TOP 619736 3652 619722 7656 619674 3794 619575 8683 BTC.TOP 619539 3702 619430 4840 619423 9801 619389 3862 619377 3728 BTC.TOP 619351 4256 619350 4981 619349 5123 619281 10944 619280 3861 619279 4983 619276 6631 619238 8044 619227 3968 619203 14757 619189 5632 619127 10043 619125 5725 619090 7957 619080 4153 619068 4711 619067 4028 619054 6274 618985 6294 618976 4737 618944 4749 618939 4651 618922 4457 618907 5814 618903 6563 618833 5929 618831 4267 618830 7402 618792 4970 618791 5590 618756 3720 618755 11999 618686 7450 618683 4016 618629 6287 618608 4324 618606 3881 618537 7128 618531 4305 618493 5356 618480 8288 618384 10289 618347 8620 618331 7653 618327 3707 618279 6803 618232 3703 618198 4941 618196 3983 618165 4349 618128 9798 618081 6702 618047 3945 617980 7241 617935 9041 
617899 9446 617898 4250 617878 8215 617838 4656 617833 4040 617832 5284 617787 6471 617749 4897 617727 3874 617726 3825 617692 3624 617691 4387 617686 3948 617638 3924 617636 4978 617615 4285 617567 3844 617565 6014 617542 5892 617533 3694 617490 4706 617488 3744 617430 3601 617416 4163 617341 4505 617337 4294 617214 5503 617182 4237 617141 4712 617039 4797 616989 5527 616891 5630 616890 4253 616878 4351 616837 6353 616808 6856 616739 3901 616684 6937 616643 3985 616511 5349 616442 3677 616441 3861 616431 4102 616409 3635 616366 4009 616357 5015 616220 8265 616203 3974 616138 4263 616072 3968 616068 4223 615986 3935 615916 6312 615777 4223 615689 4657 615623 5685 615622 4249 615596 5842 615539 3608 615495 5406 615469 5132 615324 4113 615320 7684 615293 8609 615178 7036 615031 3688 615011 4059 614949 4780 614931 4492 614854 3883 614750 5807 614708 4918 614593 3813 614435 4404 614333 4273 614325 4714 614274 5628 614249 3962 614125 7269 614105 4806 614055 4164 613973 9656 613949 3879 613894 5474 613881 4014 613830 3883 613802 4842 613778 3633 613741 3842 BTC.TOP 613693 4127 BTC.TOP 613654 4900 613597 3656 613579 4159 613531 3821 613505 4027 613500 4529
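The dry-spell tally above can be sketched in a few lines. This is an illustrative reconstruction of the analysis, not the author's actual script; the `(height, unix_timestamp, miner)` tuple format and the sample data are assumptions.

```python
# Sketch of the dry-spell analysis described above. Input is assumed to be a
# list of (height, unix_timestamp, miner) tuples in ascending height order;
# the sample data below is illustrative, not real chain data.

def dry_spell_breakers(blocks, threshold=3600):
    """Return (height, delay, miner) for each block whose timestamp is at
    least `threshold` seconds after the previous block's timestamp."""
    breakers = []
    for prev, cur in zip(blocks, blocks[1:]):
        delay = cur[1] - prev[1]
        if delay >= threshold:
            breakers.append((cur[0], delay, cur[2]))
    return breakers

def miner_share(breakers, miner):
    """Fraction of dry-spell-breaking blocks mined by `miner`."""
    hits = sum(1 for _, _, who in breakers if who == miner)
    return hits / len(breakers) if breakers else 0.0

# The headline figure from the post: 37 of 59 dry-spell breakers -> 62.7%
print(round(37 / 59 * 100, 1))  # 62.7
```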

posted by /u/jtoomim in /r/btc on February 25, 2020 18:41:17

We recently discovered a bug in p2pool that requires an emergency hard fork. There was a bug in the address-handling code that failed to zero-pad pubkeyhashes in output scripts. If the first byte of the script should have been 0x00, that byte was simply omitted. This resulted in 1/256th of all addresses producing unspendable scripts, and coins being effectively burned. This bug only affected the addresses and outputs in question; the blocks containing these unspendable outputs were still valid, and all other users continued to get paid normally. As a result of this bug, two Bitcoin Cash blocks contained unspendable payments of approximately 0.9 BCH each. I am unaware of any fund losses occurring on any other chains.

Since this bug affects the contents of the coinbase transaction, and since all nodes in p2pool need to be able to agree exactly on the contents of the coinbase transaction at any point in the share chain, fixing this bug is a hard fork of the p2pool share chain. Fixing this bug does not require any forks on BCH or BTC; only p2pool is affected.

I have published an updated version of p2pool (v35) in the [master branch]( on my github. If you're already using the master branch of my code, a simple `git pull` and restart of the node should suffice. Otherwise, there are instructions on the page linked above for how to create a fresh installation.

This bug should affect all coins on p2pool, including BTC and LTC. Users of p2pool on any cryptocurrency should update immediately. On BCH, the fork is expected to get locked in tonight or early tomorrow morning, and the fork is expected to take place on Tuesday or Wednesday.

**This does not affect anyone who is not mining on P2pool.**
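The class of bug described above can be demonstrated in a few lines. This is a hedged sketch, not the actual p2pool code: if a 20-byte pubkeyhash is handled as a big integer and serialized with minimal-length encoding, a leading 0x00 byte is silently dropped, which is exactly why 1/256th of addresses (those whose hash starts with 0x00) were affected.

```python
# Sketch of the bug class described above (illustrative, not p2pool's code):
# minimal-length integer serialization drops a leading 0x00 byte, producing
# a 19-byte pubkeyhash and an unspendable output script.

def hash_to_bytes_buggy(h: int) -> bytes:
    # Minimal-length encoding: loses any leading zero bytes.
    return h.to_bytes((h.bit_length() + 7) // 8 or 1, 'big')

def hash_to_bytes_fixed(h: int) -> bytes:
    # Always zero-pad to the full 20 bytes of a pubkeyhash.
    return h.to_bytes(20, 'big')

pkh = int.from_bytes(b'\x00' + b'\xab' * 19, 'big')  # first byte is 0x00
assert len(hash_to_bytes_buggy(pkh)) == 19  # byte dropped -> coins burned
assert len(hash_to_bytes_fixed(pkh)) == 20  # correct output script
```

A pubkeyhash starts with 0x00 with probability 1/256, matching the loss rate quoted in the post.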

posted by /u/jtoomim in /r/btc on February 9, 2020 19:39:26

A new service is rolling out [in two days]( which I think is pretty exciting. It's basically a replacement for LocalBitcoins for BCH, but in custodianless form. The operator never holds or controls your funds, but is still able to provide escrow/dispute-resolution services via a very clever smart contract.

This service is very similar to [LocalEthereum](, which depended on Ethereum's vast programmability in order to provide blind escrow to its buyers and sellers. But when BCH added OP_CHECKDATASIG in November 2018, BCH gained the ability to pull off a lot of smart contract tricks that were previously outside Bitcoin's reach. (Unfortunately, this service will not be available for standard Bitcoin (BTC) due to this dependency.)

In the past, escrow on Bitcoin has been done via multisig operations. But OP_CHECKDATASIG is [way better](

> Multi-sig is another type of Bitcoin P2SH script that uses the OP_CHECKMULTISIG op code. The difference between OP_CHECKDATASIG and multi-signature wallets is that the second party does not need to agree on how and when the BCH is sent. The oracle (which can be the seller, buyer, or arbitrator) doesn't have the ability to place conditions on the transaction.

> With a multi-signature transaction, not only must both parties agree on exactly how the BCH is spent, but **both parties must be online at the same time** to sign the transaction. This is because both parties must sign the full transaction including all outputs and inputs. With a Script that uses OP_CHECKDATASIG instead, the oracle simply needs to give the winner a signature, **which they can use at any time** to unlock the BCH in any way they choose.
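The asymmetry described in the quote can be modeled with a toy example. This is purely illustrative: a real OP_CHECKDATASIG script verifies an ECDSA signature on-chain, whereas here an HMAC stands in for the signature so the sketch stays dependency-free; the key names and verdict format are invented.

```python
# Toy model of the escrow flow described above. An HMAC stands in for an
# ECDSA data signature (illustrative only -- real OP_CHECKDATASIG verifies
# ECDSA on-chain). Key and message formats are hypothetical.
import hashlib
import hmac

ORACLE_KEY = b'oracle-secret'  # held by the arbitrator (hypothetical)

def oracle_sign(message: bytes) -> bytes:
    # The oracle signs a verdict like b'payout:buyer' and hands it to the
    # winner; the oracle can then go offline forever.
    return hmac.new(ORACLE_KEY, message, hashlib.sha256).digest()

def script_checkdatasig(message: bytes, sig: bytes) -> bool:
    # The locking script only checks that the verdict is signed; it places
    # no conditions on how or when the winner spends the coins.
    expected = hmac.new(ORACLE_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

verdict = b'payout:buyer'
sig = oracle_sign(verdict)           # issued once, usable at any later time
assert script_checkdatasig(verdict, sig)
assert not script_checkdatasig(b'payout:seller', sig)
```

The point of the design: unlike multisig, the winner redeems unilaterally whenever they choose, with no second online signer required.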
They also use end-to-end encryption between buyer and seller based on the Open Whisper protocol, just like WhatsApp, so you don't have to worry about an eavesdropper learning about your exchange and kidnapping you for ransom, or the Chinese government learning that you're abandoning your government-approved yuan and trying to get free of their baleful control like the traitorous rebel that you are. Anyway, I'm mostly geeking out on the technology, but I thought you guys might appreciate learning of a new KYC-free anonymous on/offramp into the cryptocurrency ecosystem. Note: I am not affiliated with the service in any way, although I am a Bitcoin Cash protocol developer and proponent.

posted by /u/jtoomim in /r/CryptoCurrency on June 1, 2019 10:30:54

A common meme is that Roger Ver, Jihan Wu, and Craig Wright are the ones responsible for the creation of Bitcoin Cash. This is untrue. Those are figureheads who played a role in popularizing or (in Bitmain's case) funding later development, but they played almost no part until Bitcoin Cash development was long since underway.

The Bitmain UAHF contingency plan blog post was made on 2017-06-14. This was the first event in Bitcoin Cash's history that reached a wide audience, but it came 15 months after work on what later became Bitcoin Cash began. [The public decision to do a minority hard fork]( happened 2016-07-31, and was spearheaded by singularity87 and ftrader. ftrader did most of the initial development, which he had started in March. Even back then, the plan to fork before Segwit's activation was clear:

> [I want to fork before SegWit activates](

Bitmain was merely *joining* their effort in 2017, not starting it. Bitcoin Cash evolved out of the Minimum Viable Fork project that ftrader/Freetrader [started in March 2016]( He [blogged about it quite a bit]( If you read through his posts, you can see his initial prototype was built on Bitcoin Classic; he then [switched to Bitcoin Unlimited for half a year](

The first mention of Bitcoin ABC is from [May 7, 2017]( The ABC project was started by deadalnix with mostly the same goal as ftrader's work, but using Core as the base instead of BU or Classic. At that time, ABC was just Core 0.14 minus RBF and Segwit; it didn't yet have any blocksize changes. Freetrader himself made the first prototype of Bitcoin ABC with a blocksize limit other than 1 MB on or before [May 21, 2017](, while still working in parallel on the Bitcoin Unlimited version of the MVF.
Ftrader and deadalnix continued to work on Bitcoin ABC for a couple of months before Bitmain even mentioned its support for the contingency plan, and that contingency plan was basically the same as what ftrader and singularity87 had proposed back in March 2016 (but with more refinements and details worked out): perform a minority hard fork from BTC before Segwit activates in order to increase the blocksize limit, and do so in a way that ensures as clean a split as possible. Bitcoin ABC was announced to the public on July 1st, 2017, [by ftrader]( and [by deadalnix](, about 2-3 months after deadalnix and ftrader began working on it, and 2 weeks after Bitmain announced its intent to support the UAHF.

On the date that BCH forked, there were four separate compatible full-node clients:

1. Bitcoin ABC, developed mostly by Amaury Sechet/deadalnix and freetrader;
2. Bitcoin Unlimited, developed by the BU team (Andrew Stone/thezerg, Peter Tchipper, Andrea Suisani/sickpig, Peter Rizun, freetrader, and a few others), and funded by [anonymous donors in 2016]( for their Emergent Consensus proposal;
3. Bitcoin Classic, originally developed by Gavin Andresen with a little help from me, and maintained by Tom Zander; and
4. Bitcoin XT, developed initially by Gavin Andresen and Mike Hearn, and later by Tom Harding/dgenr8 and dagurval.

Of those developers, the only ones who received money while they were working on these clients were deadalnix (Bitmain), Gavin (MIT Digital Currency Initiative), and some of the BU developers. Everybody else was a volunteer. Bitcoin Unlimited's funds did not pay for developers; they only paid for travel expenses, conferences, $20k in bug bounties, and (in 2017) servers for the Gigablock Testnet Initiative.

A lot of Bitcoin Cash's early support came from Haipo Yang of ViaBTC. ViaBTC's exchange was the first to offer BCH trading pairs, and ViaBTC's pool was the first public pool to support BCH. Haipo Yang was also the one who coined the name Bitcoin Cash.
ViaBTC played a much larger role in BCH's development than Roger Ver or Craig Wright did, and had a comparable amount of influence to Bitmain. However, this was not obvious from the outside, because Haipo Yang is the kind of person who quietly builds things that work, instead of being a prominent talking head like Craig Wright and Roger Ver.

Roger himself actually didn't fully support Bitcoin Cash until *after* the fork. Initially, [he had his hopes up for Segwit2x](, as did I. His name was conspicuously missing from an [Aug 1, 2017 article]( about who supports Bitcoin Cash. It was only after Segwit2x failed on [Nov 8, 2017]( that he started to support BCH.

Craig Wright, on the other hand, [did praise the Bitcoin Cash initiative early on](, probably largely because [he hated Segwit]( for some reason. But he didn't do anything to help create BCH; he only spoke in favor of it. (I really wish he hadn't. His involvement in BCH fostered a lot of false beliefs among Bitcoin Cash's userbase, like the belief that selfish mining doesn't exist. We were only able to get rid of his crazed followers when BSV forked off. I'm very grateful that happened. But I digress.)

These are the people who created Bitcoin Cash. It's easy to place all the credit/blame on the most vocal figureheads, but the marketing department does not create the product; they just sell it. If you weren't around during the product's development, it's hard to know who actually built the thing and who was just a bandwagon joiner. CSW and Roger just hopped on the bandwagon. Jihan Wu and Haipo Yang joined the crew and contributed substantially to its development and survival, but by the time they joined, the bandwagon was already in motion. The real instigators were community members like ftrader, deadalnix, singularity87, the BU crew, and a few others who contributed in various ways that I haven't documented.
For those of you who played a role or know of someone else who did but whom I didn't mention in this post, please make a comment below so we can all hear about it.

posted by /u/jtoomim in /r/btc on June 1, 2019 05:25:07

I've been collecting compression-efficiency data on BCH mainnet blocks with Xthinner for the last 1.5 days, and thought I would share some results. Of the last 200 blocks, there were 13 instances in which the recipient was missing one or more transactions and had to fetch them with a round trip, for a 6.5% fetch rate.

I calculated the compression efficiency in 3 separate ways:

1. With all data sent by Xthinner, including the shortID segment, the missing transactions, the coinbase transaction, and the block header;
2. With the shortIDs, coinbase, and header, but not the missing transactions; and
3. With only the shortIDs.

The mean compression rates for these 201 blocks were as follows:

    99.563% without cb+header+missing
    99.385% with cb+header, w/o missing
    99.209% with everything

In terms of bits/tx, those numbers are:

    14.021 bits/tx without cb+header+missing
    19.721 bits/tx with cb+header, w/o missing
    25.348 bits/tx with everything

The average block size during this test was 327 tx/block, or 131 kB/block. I expect these numbers to tend towards 12 bits/tx asymptotically as block sizes increase. These numbers were calculated using the sum of the Xthinner message sizes divided by the sum of the block sizes, rather than the mean of the individual blocks' compression rates. This means that my mean compression numbers are weighted by block size.

In comparison, /u/bissias reported [yesterday]( that Graphene got a *median* compression (with everything) of 98.878% on these dinky mainnet BCH block sizes. Graphene does much better at large block sizes, though, getting up to 99.88% on the biggest blocks, which is about 2x-3x better than the best Xthinner can do.

Except for the missing transactions, there were 0 errors decoding Xthinner messages. Specifically, of the last 201 blocks, there were 0 instances of Xthinner encoding too little information to disambiguate between transactions in the recipient's mempool, and there were 0 instances of checksum errors during decoding.
(This is normal and expected for regular operation. In adversarial cases or extreme stress-test scenarios with desynced mempools, these numbers might go up, but if they do, they only cause an extra round trip.) The full dataset of 201 blocks (with lame formatting) can be found [here](

Astute observers might notice that this performance result is much better than [what I first reported](, in which around 75% of blocks had "missing" transactions. It turns out that these were actually decoding ambiguities caused by my encoder having an [off-by-one error]( when finding the nearest mempool neighbor. Oopsies. Fixed. I also changed my test setup to have better and more realistic mempool synchrony. These two changes lowered the missing-transaction rate to about 6.5% of blocks.

If anyone wants to dig into the code or play around with it, you can find it [here]( Keep in mind that there may still be remote crash or remote code execution vulnerabilities, so don't run this code on anything you want to not get hacked.
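The size-weighted averaging described above (sum of message sizes over sum of block sizes, rather than a mean of per-block ratios) can be sketched as follows. The numbers are illustrative, not from the real dataset.

```python
# Size-weighted compression as described above: total Xthinner bytes divided
# by total block bytes, so big blocks dominate the figure (illustrative
# numbers; the real dataset is linked in the post).

def weighted_compression(msg_sizes, block_sizes):
    return 1.0 - sum(msg_sizes) / sum(block_sizes)

def mean_of_ratios(msg_sizes, block_sizes):
    # The alternative the post explicitly does NOT use.
    ratios = [1.0 - m / b for m, b in zip(msg_sizes, block_sizes)]
    return sum(ratios) / len(ratios)

# Two hypothetical blocks: the large block dominates the weighted figure.
msgs = [200, 1600]
blocks = [20_000, 400_000]
print(f"{weighted_compression(msgs, blocks):.3%}")  # 99.571%
```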

posted by /u/jtoomim in /r/btc on April 24, 2019 01:52:32

A few hours ago, I fixed the last showstopping bug in my Xthinner code and got it running between two of my ABC full nodes on mainnet. One node serves as a bridge to the rest of the world, receiving Compact Blocks and transmitting Xthinner. The other is connected to no other nodes except this bridge.

The first block transmitted by Xthinner was 577310. My nodes had just started when that block was published, so it was transmitted with only 24 transactions in mempool out of 2865 total in the block. It worked nonetheless. Xthinner has worked on every block since then, with no failures, and with no block taking more than 1.5 networking round trips. Most non-tiny blocks have gotten about 99% compression after fetching missing transactions, or about 99.3% before fetching. Eight blocks have been complete on arrival without any missing-transaction fetching (0.5 round trips), and 24 blocks have required a round trip to fetch missing transactions.

I will probably make an alpha code release soon so that people can play around with it. The code still has some known bugs and vulnerabilities, though, so don't run it on anything you want to stay running.

Here's the best-performing block so far, and one of the biggest:

    2019-04-08 09:27:53.076818 received: xtrblk (1660 bytes) peer=0
    2019-04-08 09:27:53.077210 Filling xtrblk with mempool size 841
    2019-04-08 09:27:53.077644 xtrblk: 841 tx, 1 prefilled
    2019-04-08 09:27:53.077707 Received complete xthinner block: 000000000000000002f914b0c6afb568bec86b9a5166a5023f466c5ee7100e90.
    2019-04-08 09:27:53.136257 UpdateTip: new best=000000000000000002f914b0c6afb568bec86b9a5166a5023f466c5ee7100e90 height=577332 version=0x20800000 log2_work=87.837579 tx=269896356 date='2019-04-08 09:27:30' progress=1.000000 cache=10.6MiB(79763txo) warning='40 of last 100 blocks have unexpected version'

This was an 841-tx, 363 kB block transmitted in 1660 bytes. That's 99.54% compression, or 15.79 bits/tx.
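The arithmetic behind the quoted figures checks out, as a quick sketch shows:

```python
# Checking the figures quoted above: an 841-tx, 363 kB block transmitted
# as a 1660-byte xtrblk message.
msg_bytes, block_bytes, n_tx = 1660, 363_000, 841

compression = 1.0 - msg_bytes / block_bytes
bits_per_tx = msg_bytes * 8 / n_tx

print(f"{compression:.2%}")  # 99.54%
print(f"{bits_per_tx:.2f}")  # 15.79
```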
Uncoincidentally, this was also one of the largest blocks so far, with 23 minutes elapsed since the prior block.

Sizes of the xtrblk messages:

    2019-04-08 06:17:48.394401 received: xtrblk (4511 bytes) peer=0
    2019-04-08 06:34:40.219904 received: xtrblk (1249 bytes) peer=0
    2019-04-08 06:50:25.290082 received: xtrblk (1209 bytes) peer=0
    2019-04-08 06:51:49.082137 received: xtrblk (282 bytes) peer=0
    2019-04-08 07:04:02.028427 received: xtrblk (416 bytes) peer=0
    2019-04-08 07:09:44.603728 received: xtrblk (1235 bytes) peer=0
    2019-04-08 07:15:32.338061 received: xtrblk (351 bytes) peer=0
    2019-04-08 07:17:25.983502 received: xtrblk (839 bytes) peer=0
    2019-04-08 07:19:38.947229 received: xtrblk (498 bytes) peer=0
    2019-04-08 07:21:22.099113 received: xtrblk (404 bytes) peer=0
    2019-04-08 07:37:20.573195 received: xtrblk (569 bytes) peer=0
    2019-04-08 07:38:41.106193 received: xtrblk (1259 bytes) peer=0
    2019-04-08 07:46:40.656947 received: xtrblk (764 bytes) peer=0
    2019-04-08 07:52:40.203599 received: xtrblk (591 bytes) peer=0
    2019-04-08 08:01:30.239679 received: xtrblk (776 bytes) peer=0
    2019-04-08 08:26:06.212842 received: xtrblk (287 bytes) peer=0
    2019-04-08 08:37:10.882075 received: xtrblk (2177 bytes) peer=0
    2019-04-08 08:39:05.003971 received: xtrblk (392 bytes) peer=0
    2019-04-08 08:40:27.191932 received: xtrblk (274 bytes) peer=0
    2019-04-08 08:53:57.338920 received: xtrblk (1294 bytes) peer=0
    2019-04-08 08:54:44.033299 received: xtrblk (344 bytes) peer=0
    2019-04-08 09:04:55.541082 received: xtrblk (947 bytes) peer=0
    2019-04-08 09:27:53.076818 received: xtrblk (1660 bytes) peer=0
    2019-04-08 09:39:21.527632 received: xtrblk (878 bytes) peer=0
    2019-04-08 09:48:57.831915 received: xtrblk (836 bytes) peer=0
    2019-04-08 09:49:18.074036 received: xtrblk (243 bytes) peer=0
    2019-04-08 09:52:09.949254 received: xtrblk (474 bytes) peer=0
    2019-04-08 10:05:35.192227 received: xtrblk (451 bytes) peer=0
    2019-04-08 10:12:37.671585 received: xtrblk (1317 bytes) peer=0
    2019-04-08 10:12:40.761272 received: xtrblk (294 bytes) peer=0
    2019-04-08 10:13:10.548404 received: xtrblk (278 bytes) peer=0
    2019-04-08 10:17:06.108110 received: xtrblk (512 bytes) peer=0

Sizes of the fetched missing transactions:

    2019-04-08 06:17:48.410703 received: xtrtxn (842930 bytes) peer=0
    2019-04-08 06:34:40.221133 received: xtrtxn (5691 bytes) peer=0
    2019-04-08 06:50:25.291309 received: xtrtxn (517 bytes) peer=0
    2019-04-08 07:04:02.029652 received: xtrtxn (3461 bytes) peer=0
    2019-04-08 07:09:44.604922 received: xtrtxn (744 bytes) peer=0
    2019-04-08 07:15:32.339450 received: xtrtxn (1155 bytes) peer=0
    2019-04-08 07:17:25.984684 received: xtrtxn (3337 bytes) peer=0
    2019-04-08 07:19:38.948412 received: xtrtxn (654 bytes) peer=0
    2019-04-08 07:21:22.100418 received: xtrtxn (3510 bytes) peer=0
    2019-04-08 07:37:20.574477 received: xtrtxn (3990 bytes) peer=0
    2019-04-08 07:38:41.107558 received: xtrtxn (519 bytes) peer=0
    2019-04-08 07:52:40.204659 received: xtrtxn (2364 bytes) peer=0
    2019-04-08 08:01:30.240842 received: xtrtxn (275 bytes) peer=0
    2019-04-08 08:26:06.214200 received: xtrtxn (274 bytes) peer=0
    2019-04-08 08:39:05.005097 received: xtrtxn (273 bytes) peer=0
    2019-04-08 08:53:57.340233 received: xtrtxn (514 bytes) peer=0
    2019-04-08 08:54:44.034397 received: xtrtxn (1243 bytes) peer=0
    2019-04-08 09:04:55.542438 received: xtrtxn (420 bytes) peer=0
    2019-04-08 09:39:21.528842 received: xtrtxn (811 bytes) peer=0
    2019-04-08 09:49:18.075155 received: xtrtxn (274 bytes) peer=0
    2019-04-08 09:52:09.950762 received: xtrtxn (10478 bytes) peer=0
    2019-04-08 10:05:35.193791 received: xtrtxn (8248 bytes) peer=0
    2019-04-08 10:12:40.762645 received: xtrtxn (1741 bytes) peer=0

posted by /u/jtoomim in /r/btc on April 8, 2019 06:32:35

Xthinner is a [new block propagation protocol]( which I have been working on. It takes advantage of LTOR to give about 99.6% compression for blocks, as long as all of the transactions in the block were previously transmitted. That's about 13 bits (1.6 bytes) per transaction. Xthinner is designed to be fault-tolerant, and to handle situations in which the sender's and receiver's mempools are not well synchronized with gracefully degrading performance -- missing transactions or other decoding errors can be detected and corrected with one or (rarely) two additional round trips of communication. My expectation is that when it is finished, it will perform about 4x to 6x better than Compact Blocks and Xthin for block propagation. Relative to Graphene, I expect Xthinner to perform similarly under ideal circumstances (better than Graphene v1, slightly worse than Graphene v2), but much better under strenuous conditions (i.e. mempool desynchrony).

The current development status of Xthinner is as follows:

1. Python proof-of-concept encoder/decoder -- [done]( 2018-09-15
2. Detailed informal writeup of the encoding scheme -- [done]( 2018-09-29
3. Modify TxMemPool to allow iterating on a view sorted by TxId -- done 2018-11-26
4. Basic C++ segment encoder -- done 2018-11-26
5. Basic C++ segment decoder -- done 2018-11-26
6. Checksums for error detection -- done 2018-12-09
7. Serialization/deserialization -- done 2018-12-09
8. Prefilled transactions, coinbase handling, and non-mempool transactions -- done 2018-12-25
9. Missing/extra transactions, re-requests, and handling mempool desynchrony for segment decoding -- done 2019-01-12
10. Block transmission coupling the block header with one or more Xthinner segments -- 50% 2019-01-12
11. Missing/extra transactions, re-requests, and handling mempool desynchrony for block decoding
12. Integration with Bitcoin ABC networking code
13. Network testing on regtest/testnet/mainnet with real blocks
14. Write BIP/BUIP and formal spec
15. Bitcoin ABC pull request and beginning of code review
16. Unit tests, performance tests, benchmarks -- started
17. Bitcoin Unlimited pull request and beginning of code review
18. Alpha release of binaries for testing or low-security block relay networks
19. Merging code into ABC/BU, disabled-by-default
20. Complete security review
21. Enable by default in ABC and/or BU
22. (Optional) parallelize encoding/decoding of blocks

Following is the debugging output from a test run done with coherent sender/recipient mempools on a 1.25 million tx block, edited for readability:

    Testing Xthinner on a block with 1250003 transactions with sender mempool size 2500000 and recipient mempool size 2500000
    Tx/Block creation took 262 sec, 104853 ns/tx (mempool)
    CTOR block sorting took 2467 ms, 987 ns/tx (mempool)
    Encoding is 1444761 pushBytes, 2889520 1-bit commands, 103770 checksum bytes
    total 1910345 bytes, 12.23 bits/tx
    Single-threaded encoding took 2924 ms, 1169 ns/tx (mempool)
    Serialization/deserialization took 1089 ms, 435 ns/tx (mempool)
    Single-threaded decoding took 1912314 usec, 764 ns/tx (mempool)
    Filling missing slots and handling checksum errors took 0 rounds and 12 usec, 0 ns/tx (mempool)
    Blocks match!
    *** No errors detected

If each transaction were 400 bytes on average, this block would be 500 MB, and it was encoded in 1.9 MB of data, a 99.618% reduction in size. Real-world performance is likely to be somewhat worse than this, as it's not likely that 100% of the block's transactions will always be in the recipient's mempool, but the performance reduction from mempool desynchrony is smooth and predictable. If the recipient is missing 10% of the sender's transactions, and has another 10% that the sender does not have, the transaction list is still able to be successfully transmitted and decoded, although in that case it usually takes 2.5 round trips to do so, and the overall compression ratio ends up being around 68% instead of 99.6%.
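The headline numbers from the debug output above can be re-derived directly, assuming the 400-bytes-per-transaction average stated in the post:

```python
# Checking the encoder stats quoted above: 1,910,345 bytes for a
# 1,250,003-tx transaction list, compared against a hypothetical 500 MB
# block (assuming 400 bytes per transaction on average, as in the post).
encoded_bytes, n_tx, avg_tx_bytes = 1_910_345, 1_250_003, 400

bits_per_tx = encoded_bytes * 8 / n_tx
reduction = 1.0 - encoded_bytes / (n_tx * avg_tx_bytes)

print(f"{bits_per_tx:.2f}")  # 12.23
print(f"{reduction:.3%}")    # 99.618%
```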
Once Xthinner is finished, I intend to start working on Blocktorrent. Blocktorrent is a method for breaking a block into small, independently verifiable chunks for transmission, where each chunk is about one IP packet (a bit less than 1500 bytes) in size. In the same way that Bittorrent was faster than Napster, Blocktorrent should be faster than Xthin(ner).

Currently, one of the big limitations on block propagation performance is that a node cannot forward the first byte of a block until the last byte of the block has been received and completely validated. Blocktorrent will change that, and allow nodes to forward each IP packet shortly after that packet was received, regardless of whether any other packets have also been received and regardless of the order in which the packets are received. This should dramatically improve the bandwidth utilization efficiency of nodes during block propagation, and should reduce the block propagation latency for reaching the full network quite a lot -- my current estimate is about a 10x improvement over Xthinner.

Blocktorrent achieves this partial validation of small chunks by taking advantage of Bitcoin blocks' Merkle tree structure. Chunks of transactions are transmitted in a packet along with enough data from the rest of the Merkle tree's internal nodes to allow that chunk of transactions to be validated back to the Merkle root, the block header, and the mining PoW, thereby ensuring that the packet being forwarded is not invalid spam data used solely for a DoS attack. (Forwarding DoS attacks to other nodes is bad.) Each chunk will contain an Xthinner segment to encode TXIDs. My performance target with Blocktorrent is to be able to propagate a 1 GB block in about 5-10 seconds to all nodes in the network that have 100 Mbps connectivity and quad-core CPUs.
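The per-chunk validation idea described above boils down to a Merkle branch check. A minimal sketch, using Bitcoin-style double-SHA256 (the helper names are hypothetical, and real Merkle branches cover chunks of transactions rather than a single leaf):

```python
# Minimal sketch of validating a chunk back to the Merkle root: hash the
# leaf up the tree using a branch of sibling hashes and compare to the root.
# Bitcoin-style double-SHA256; helper names are hypothetical.
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_branch_ok(leaf: bytes, index: int, branch: list, root: bytes) -> bool:
    h = leaf
    for sibling in branch:
        # The low bit of the index says whether we are the right or left child.
        h = dsha256(sibling + h) if index & 1 else dsha256(h + sibling)
        index >>= 1
    return h == root

# Tiny 4-leaf tree built the straightforward way, then verify leaf 2.
leaves = [dsha256(bytes([i])) for i in range(4)]
l01 = dsha256(leaves[0] + leaves[1])
l23 = dsha256(leaves[2] + leaves[3])
root = dsha256(l01 + l23)
assert merkle_branch_ok(leaves[2], 2, [leaves[3], l01], root)
```

A chunk whose branch does not hash up to the PoW-backed header is discarded instead of forwarded, which is what prevents the spam-relay DoS mentioned above.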

posted by /u/jtoomim in /r/btc on January 12, 2019 21:24:06

I'd like to do some analysis of block propagation and block validation performance during the stress test, and I need some data. Can you give me yours? Specifically, I need ~/.bitcoin/debug.log files from as many different nodes as possible. I'm especially interested in BUSV and Bitcoin SV nodes, since the BCH test's spam generation was lackluster.

Just sending me a link to your debug.log file is sufficient. But if you want to help me out a little more, I would appreciate it if you can rename the files so that they include your node's IP address or another unique identifier of your choice, like so:

    debug-
    meta-

or

    debug-jtoomim-moseslake-busv.log
    meta-jtoomim-moseslake-bu.txt

The meta files are for any human-readable information you want to include about your nodes, like:

1. CPU type
2. RAM
3. SSD? HDD?
4. Location
5. Bandwidth
6. VPS or dedicated
7. dbcache setting
8. Whether ntpd or another clock sync method was correctly configured during the test
9. IP address (if you're willing to share it publicly)

If you have any other data that you've collected, such as CPU usage statistics or bandwidth monitoring data, feel free to give me a link for that as well. If you would rather not post any of this publicly, you may email me at jATtoomDOTim and tell me which (meta-)data you want me to not repeat without permission. If anyone else wants to look through this data, let me know and I will put it all in a public repository somewhere.

The intent is to do an analysis similar to [the one I did for the Sep 1st stress test]( I believe /u/rinexc managed to log some stratum header information from SVPool, and we also saw 6 orphaned blocks from SVPool alone, plus quite a few from other pools, so I think there's going to be some interesting stuff to find here. I also have an analysis of an incomplete, early dataset [here](

posted by /u/jtoomim in /r/btc on November 28, 2018 18:30:25

I made a simple web front-end for my Toomim Time anti-reorg algorithm simulator. You can mess with it here: It all runs server-side, so if too many people are trying to use it at any given time, it will get slow and buggy. If that happens, you can run a copy locally.

This algorithm penalizes chains heavily if the first few blocks after the fork were released with a delay after the honest chain, and penalizes the chain lightly if blocks long after the fork were delayed relative to the other chain. If the defender chain is shorter than the other, we need to calculate what penalties the defender chain would receive from the blocks that the defender would need to generate in order to catch up. To do this, we add "hypothetical blocks" with the current time. These hypothetical blocks are ignored for calculating the PoW done, but they do contribute to the penalty. The increase in the penalty for the shorter chain over time is the main mechanism by which the algorithm eventually converges to the most-PoW chain.

Some simplified pseudocode for how the scores are calculated:

    root = last_common_ancestor(chaintip, enemytip)
    blk = chaintip
    while blk.pow < enemytip.pow:  # need to add hypothetical blocks
        blk = Block(parent=blk, firstseen=enemytip.firstseen, tag='-TBD')
    chain_with_hypotheticals = list_children_to_root(blk, root)
    for blk in chain_with_hypotheticals:
        blk.delay = max(0, blk.time_received - cousin(blk).time_received)
        blk.penalty = blk.delay / timeconstant / (blk.height - root.height) ** exponent
    chain_penalty = sum(blk.penalty for blk in chain_with_hypotheticals)
    score = (chaintip.pow - root.pow) / chain_penalty

Finalization is optional with this algorithm. The sim has finalization disabled by default, but you can put e.g. 10 into the finalization field if you want to see how that affects things.
Finalization will only happen if (a) the defender chain is at least *n* blocks ahead of the attacker chain, and (b) the defender chain has at least 2x the score of the attacker chain at that time.

The default for the simulation is to emulate an attacker that has 2x as much hashrate as the defense (i.e. can mine blocks every 300 seconds on average), and waits 1 hour before publishing any of the blocks they have mined. This is intended to emulate the scenario in which a malicious actor tries to double-spend an exchange with a 67% hashrate majority. The attacker will give up if he doesn't win within 50 hours. With these settings, the attacker's chain is adopted in about 39.7% of attempts, and most of those attacker victories are fast. The defenders only suffer >=10 block reorgs in 3.3% of all scenarios, and 79.9% of the attacker's blocks are orphaned versus 18.5% of the defender's.

The full code for the algorithm is [here]( To install and run locally, use the following commands:

    sudo apt-get install pypy  # if you can't get pypy to work on your platform, regular python works too
    wget
    sudo pypy
    rm
    sudo pip install dash
    sudo pip install dash-html-components
    sudo pip install dash-core-components
    sudo pip install dash-table
    git clone
    cd antiReorgSim
    pypy

You can then view your local instance by pointing a web browser to localhost:8050.
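For readers who don't want to run the full sim, the scoring rule can be condensed into a toy function. This is my simplification, not the repository's code: each chain is reduced to a list of per-block (pow, delay) pairs ordered from the first post-fork block to the tip, and I add 1 to the penalty denominator so that an undelayed chain doesn't divide by zero.

```python
# Toy condensation of the chain-score rule described above (not the sim's code).
TIME_CONSTANT = 120.0  # seconds; the empirical constant from the post
EXPONENT = 2           # penalty is divided by the square of height past the fork

def chain_score(blocks):
    """blocks: list of (pow, delay_seconds) tuples, first post-fork block first.
    Returns PoW since the fork divided by (1 + chain penalty); the '1 +' is my
    addition to keep an undelayed chain's score equal to its raw PoW."""
    total_pow = sum(pow_ for pow_, _ in blocks)
    penalty = sum(delay / TIME_CONSTANT / (height ** EXPONENT)
                  for height, (_, delay) in enumerate(blocks, start=1))
    return total_pow / (1.0 + penalty)
```

With unit-work blocks, an honest chain of 10 undelayed blocks scores 10.0, while a 12-block attack chain hidden for an hour (3600 s delay on every block) scores well under 1, so the shorter honest chain wins until the attacker's extra PoW eventually outgrows the penalty.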

posted by /u/jtoomim in /r/btc on November 25, 2018 22:50:09

Yesterday, I came up with a new algorithm for making secret reorg attacks very expensive and difficult to pull off. This new algorithm is designed to avoid the permanent chainsplit vulnerabilities of ABC 0.18.5 while being more effective at punishing malicious behavior.

The key to the new algorithm is to punish exactly the behavior that indicates malice. First, publishing a block after another block at the same height has arrived on the network suggests malice or poor performance, and the likelihood of malice increases as the delay increases. A good algorithm would penalize blocks in proportion to how much later they were published after the competing block. Second, building upon a block that was intentionally delayed is also a sign of malice. Therefore, a good algorithm would discount the work done by blocks based not only on their own delays, but also on the delays seen earlier in that chain. Since the actions at the start of the fork are more culpable (as they generate the split), we want to weight those blocks more heavily than later blocks.

I wrote up an algorithm that implements these features. When comparing two chains, you look at the PoW done since the fork block, and divide that PoW by a penalty score. The penalty score for each chain is calculated as the sum of the penalty scores for each block. Each block's penalty score is equal to the apparent time delay of that block^([1]), divided by 120 seconds^([2]), and further divided by the square^([3]) of that block's height^([4]) from the fork.

This algorithm has some desirable properties:

1. It provides smooth performance. There are no corners or sharp changes in its incentive structure or penalty curve.
2. It converges over very long time scales. Eventually, if one chain has more hashrate than the other and that is sustained indefinitely, the chain with the most hashrate will win by causing the chain penalty score for the slower (less-PoW) chain to grow.
3. The long-term convergence means that variation in observed times early in the fork will not cause permanent chainsplits.
4. Over intermediate time scales (e.g. hours to weeks), the penalty given to secret-mining deep-reorg chains is very large and difficult to overcome even with a significant hashrate advantage. The penalty increases the longer the attack chain is kept secret. This makes attack attempts ineffective unless they are published within about 20 minutes of the attack starting.
5. Single-block orphan race behavior is identical to existing behavior unless one of the blocks has a delay of at least 120 seconds, in which case that chain would require a total of 3 blocks (or more) to win instead of just 2.
6. As the algorithm strongly punishes hidden chains, finalization becomes much safer as long as you prevent finalization from happening while there are known competitive alternate chains. However, this algorithm is still effective without finalization.

I wrote up this algorithm into a Python sim yesterday and have been playing around with it since. It seems to perform quite well. For example, if the attacker has 1.5x as much hashrate as the defenders (who had 100% of the hashrate before the fork), mines in secret for 20 minutes before publishing, and if finalization is enabled after 10 blocks when there's at least a 2x score advantage, then the attacker gets an orphan rate of 49.3% on their blocks and is only able to cause a >= 10 block reorg in 5.2% of cases, and none of those happen blindly, as the opposing chain shows up when most transactions have about 2 confirmations. If the attacker waits 1 hour before publishing, the attack is even less effective: 94% of their blocks are orphaned, 95.6% of their attempts fail, 94.3% of the attacks end with defenders successfully finalizing, and only 0.6% of attack attempts result in a >= 10 block reorg.
The code for my algorithm and simulator can be found on [my antiReorgSim Github repository]( If you guys have time, I'd appreciate some review and feedback. Thanks!

Special thanks to Jonald Fyookball and Mark Lundeberg for reviewing early versions of the code and the ideas. I believe Jonald is working on a Medium post based on some of these concepts. Keep an eye out for it.

---

Note 1: This time delay is calculated by finding the best competing chain's last block with less work than this one and the first block with more work than this one, and interpolating the time-first-seen between the two. The time at which the block was fully downloaded and verified is used as time-first-seen, not the time at which the header was received nor the block header's timestamp.

Note 2: An empirical constant, intended to be similar to worst-case block propagation times.

Note 3: A semi-empirical constant; this balances the effect of early blocks against late blocks. The motivation for squaring is that late blocks gain an advantage for two multiplicative reasons: First, there are more late blocks than early blocks. Second, the time deltas for late blocks are larger. Both of these factors are linear versus time, so canceling them out can be done by dividing by height squared. This way, the first block has about as much weight as the next 4 blocks; the first two blocks have as much weight as the next 9 blocks; and the first (n) blocks have about as much weight as the next (n+1)^2 blocks. Any early advantage can be overcome eventually by a hashrate majority, so over very long time scales (e.g. hours to weeks), this rule is equivalent to the simple Satoshi most-PoW rule, as long as the hashrate on each chain is constant. However, over intermediate time scales, the advantage to the first-seen blocks is large enough that the hashrate will likely not remain constant, and hashrate will likely switch over to whichever chain has the best score and looks the most honest.
Note 4: The calculation doesn't actually use height, as that would be vulnerable to DAA manipulation. Instead, the calculation uses pseudoheight, which uses the PoW done and the fork block's difficulty to calculate what the height would be if all blocks had the fork block's difficulty.
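Note 4's pseudoheight can be illustrated with a small sketch (my own illustration; the sim may compute this differently). The idea is to measure chain length in units of fork-difficulty blocks' worth of work, so spinning the DAA down to mine many cheap blocks doesn't inflate the heights used in the penalty calculation:

```python
def pseudoheights(cumulative_work, fork_work, fork_difficulty):
    """Pseudoheight of each block past the fork: work accumulated since the
    fork, measured in units of fork-difficulty blocks rather than the raw
    (DAA-manipulable) block count.
    cumulative_work: list of each block's cumulative chain work, tip-ward order.
    fork_work: cumulative chain work at the fork block.
    fork_difficulty: expected work per block at the fork block's difficulty."""
    return [(w - fork_work) / fork_difficulty for w in cumulative_work]
```

For example, four half-difficulty blocks only count as two fork-difficulty blocks of height, so a chain that games the DAA downward gains nothing in the height term of the penalty.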

posted by /u/jtoomim in /r/btc on November 23, 2018 15:14:28

Please make sure you have NTP installed and active, or another form of clock synchronization, and please enable debug=bench and do anything else you can think of to collect data on node performance. The stress test is expected to be pretty big. The expected transaction throughput rate on this one is something like 250 tx/sec, which is likely to saturate ABC's net_processing code. It will be informative if we can collect some data on where exactly in the code the bottlenecks are. If you know how to compile from source, it would be great to compile bitcoind with profiling symbols enabled (export CXXFLAGS="-pg -O2", I think, followed by "./bitcoind -daemon=0" -- profiling data won't be collected in daemon mode). Also, information on whether the UTXO cache and disk activity are the bottleneck will be helpful. Does dbcache=8192 help? If you have a ton of RAM, does cat ~/.bitcoin/chainstate/* >/dev/null help? (That would cache the DB at the OS level. Still uses leveldb, but no disk.) Do you see high disk tps with iostat? High iowait %? Unfortunately, I'm extremely busy right now and won't have the time I wish I had to devote to this project, so if anybody else who has time can make specific suggestions on how to implement the things above, that would be appreciated.

posted by /u/jtoomim in /r/btc on November 16, 2018 20:56:41

Recently, [Craig Wright has claimed]( that the motivation for Bitcoin ABC's OP_CHECKDATASIGVERIFY is to allow for illegal activity on Bitcoin Cash by enabling futures markets and on-chain gambling. But there's a problem with this claim: you don't need OP_CDSV for any of those things. You can do on-chain gambling without CDSV. You can do futures contracts (e.g. for assassination of a target) without CDSV. All you need for that is for an oracle to SHA256 two secret messages, and then only reveal one of the two messages later on. The spending transaction needs to produce the secret message in order to unlock the funds.

An oracle can publish two SHA256 hashes:

1. SHA_A means that JFK has been assassinated as of Jan 1st, 1970.
2. SHA_B means that JFK has not been assassinated as of Jan 1st, 1970.

The oracle keeps the messages which are used to generate those hashes secret until 1970, at which time the oracle releases either MSG_A or MSG_B, such that SHA256(MSG_A) = SHA_A, and so forth. A market can then be established for transactions that pay out to a different pubkey depending on which of the two messages has been revealed.

[Thanks to Thomas Bakketun for pointing this out](
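The commit-and-reveal scheme above is easy to demonstrate off-chain in Python. The message strings and helper names here are just illustrations; an actual contract would hardcode the two hashes in a script that hashes the revealed preimage with OP_SHA256 and routes the payout accordingly:

```python
import hashlib

def commit(message: bytes) -> str:
    """The oracle publishes the SHA256 hash of each secret message up front."""
    return hashlib.sha256(message).hexdigest()

# Hypothetical secret messages the oracle prepares; only one is ever revealed.
MSG_A = b"JFK has been assassinated as of Jan 1st, 1970."
MSG_B = b"JFK has not been assassinated as of Jan 1st, 1970."
SHA_A, SHA_B = commit(MSG_A), commit(MSG_B)

def which_outcome(revealed: bytes):
    """A spender proves the outcome by producing the preimage of one of the
    two published hashes; anything else proves nothing."""
    digest = commit(revealed)
    if digest == SHA_A:
        return "A"
    if digest == SHA_B:
        return "B"
    return None
```

Until the oracle reveals a message, neither outcome can be claimed; once MSG_A (say) is public, anyone can verify SHA256(MSG_A) = SHA_A and the corresponding payout branch becomes spendable.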

posted by /u/jtoomim in /r/btc on November 10, 2018 14:59:23

I'd like to do some analysis of block propagation and block validation performance during the stress test, and I need some data. Can you give me yours?

Specifically, I need ~/.bitcoin/debug.log files from as many different nodes as possible. Just sending me a link to your debug.log file is sufficient. But if you want to help me out a little more, I would appreciate it if you can rename the files so that they include your node's IP address, like so:

    debug-
    meta-

The meta files are for any human-readable information you want to include about your nodes, like:

1. CPU type
2. RAM
3. SSD
4. Location
5. Bandwidth
6. VPS or dedicated
7. dbcache setting
8. IP address

If you have any other data that you've collected, such as CPU usage statistics or bandwidth monitoring data, feel free to give me a link for that as well. If you don't want to post any of this publicly, you may email me at jATtoomDOTim and tell me which (meta-)data you want me to not repeat without permission. If anyone else wants to look through this data, let me know and I will put it all in a public repository somewhere.

Some early preliminary results show that block propagation for some of these blocks is taking [more than 10 seconds]( for a block that only took 100-200 ms to verify. As expected, block propagation is the slowest part of the process, with the possible exception of [AcceptToMemoryPool](

posted by /u/jtoomim in /r/btc on September 3, 2018 16:45:39

If you watch transactions as they hit the mempool (e.g. , you'll notice that they tend to come in large batches, with several minutes elapsing in between batches. I've had running and generating transactions during this interval, and noticed that the transactions I generate usually take several minutes before they're visible on block explorers or on my local node's mempool. For example, this transaction was generated by my webpage about 14 minutes before I wrote this, but when I queried Bitcoin ABC, I see it's not there yet:

    bch@feather:~$ abccli getrawtransaction b639cf06646a01a93f29cbc9b773755158bb712e2e5f3c10978f745e89341a39
    error code: -5
    error message: No such mempool transaction.

Ten minutes later, I try again, and this time it's there:

    bch@feather:~$ abccli getrawtransaction b639cf06646a01a93f29cbc9b773755158bb712e2e5f3c10978f745e89341a39
    0200000001c0486ca96c7a8d4f5b1ea68f419ca2c76c1ec3d0613ed11746ead1b4d1addc64000000006a473044022009f68f4c84dd7d94758c49dffb6e4ae28bf74588475352803177cbbe0e0e765c022036f01d76c176a82c645987929cf73cc80d6a3b500f1a79321be4095564431b2141210340a65a40cb472752045abf1a5990d6d85a1d6f71da7dde40dd8b15c179961b1dffffffff02460b0000000000001976a9147a1402392a64f64894296d2528cf907e4b76432488ac0000000000000000186a1673747265737374657374626974636f696e2e6361736800000000

So something is bottlenecking transactions in between their generation in javascript in my web browser and the full node network. This could just be an issue with's webservers. We don't know anything about how those servers work. But it could also be the [AcceptToMemoryPool bottleneck]( Perhaps what is happening is that a large batch of transactions comes in and fills a node's network buffers. Eventually, AcceptToMemoryPool() gets run, locks cs_main and cs_mempool, and runs through all of the transactions.
The locking of cs_mempool prevents the networking threads from reading mempool and uploading the transactions to the next peer until this batch of transactions is finished processing. Once that happens, the networking code locks cs_mempool and prevents AcceptToMemoryPool from running, causing the socket reading code to fill its buffers while waiting for ATMP to run again. The process then repeats indefinitely, causing batched broadcasts of transactions instead of smooth trickles. Note: I'm not 100% sure that this is how the ATMP code and locks work. I haven't read that section for a while. But it seems likely that the ATMP bottleneck could result in transaction batching. It also seems like we're getting close to the expected ATMP bottleneck level of 100 tx/sec average (20 MB/block) that was seen in the Gigablock Testnet Initiative. It's possible that their servers were more consistently powerful than what we have on mainnet, resulting in the ATMP bottleneck being lower.
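The lock ping-pong described above can be caricatured with a toy model (my simplification, not the actual locking behavior of any client): if admission to the mempool only completes once per lock cycle, a perfectly smooth trickle of incoming transactions still leaves the node as a few large bursts.

```python
import math

def relay_time(arrival_time, lock_period=60.0):
    """A transaction arriving mid-cycle is only admitted (and thus relayed)
    when the current batch finishes and the lock is next released. The 60 s
    lock period is a made-up parameter for illustration."""
    return math.ceil(arrival_time / lock_period) * lock_period

arrivals = [i * 0.5 for i in range(1, 241)]          # smooth trickle: 2 tx/sec for 2 min
bursts = sorted({relay_time(a) for a in arrivals})   # collapses to 2 relay bursts
```

Here 240 evenly spaced transactions leave the node in just two bursts, which qualitatively matches the batchy arrival pattern observed on the explorers, even though the transactions were generated smoothly.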

posted by /u/jtoomim in /r/btc on September 1, 2018 22:35:14

Many subreddits, like /r/pics, are oriented around posts that are memes or links, and the interesting lifetime of that post is pretty short. Other subreddits are mostly focused around open-ended discussion, like /r/btc or r/TwoXChromosomes. The current post sorting algorithms do not serve the latter category well, as often the discussions will continue on for days after the main post has left the front page. While the people who are engaged in these discussions will keep coming back to them after seeing the red envelope, there won't be any new eyes on the threads and eventually these conversational gems will be lost to the world. A new set of sorting algorithms could be helpful for that latter subreddit type. These algorithms could look at metrics of average comment upvotes, total comment upvotes, the rate at which new comments are being added, or the amount of detail in those comments. The key here is that posts should be judged in part based on the comments they attract. Keeping active discussions on the front page for as long as they are actively discussed will increase visibility for the long, thoughtful comments and insightful discussions that keep people's interest for days, and will ultimately increase the exposure for interesting ideas and in-depth research and commentary.
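As a concrete (and entirely invented) example of what such a sorting algorithm could look like, here is a sketch that weights a post by its comment activity and decays much more slowly than the usual "hot" sort. The weights and half-life are placeholders for illustration, not a tuned proposal:

```python
def discussion_score(post_upvotes, comment_upvotes, comments_last_hour,
                     hours_old, half_life_hours=48.0):
    """Rank a post by the discussion it attracts, not just its own score.
    comment_upvotes: upvote count of each comment; comments_last_hour measures
    how actively the thread is still growing. All weights are hypothetical."""
    activity = sum(comment_upvotes) + 5.0 * comments_last_hour
    decay = 0.5 ** (hours_old / half_life_hours)  # far slower than "hot" decay
    return (post_upvotes + activity) * decay
```

Under this scoring, a day-old thread that is still accumulating upvoted comments can outrank a higher-scoring post whose discussion has gone quiet, which is the behavior the post above argues discussion-oriented subreddits need.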

posted by /u/jtoomim in /r/redesign on August 31, 2018 21:30:05

Hi folks. As you know, we're going to be getting a stress test tomorrow. This is a great opportunity to collect performance data on the BCH network. However, if we're not logging data when the test happens, the opportunity will be lost.

I recommend that people configure their full nodes to log extra performance information during the test. Adding debug=bench and logips=1 to your bitcoin.conf will help. Make sure you have NTP installed and running properly on your machines so that your logfile timestamps are accurate. If we can collate a bunch of log files after the event, we can see when blocks arrive at each node and get some idea for how long block propagation takes. Including information about your full node's hardware and software configuration in the collated data will be helpful. It would also be good to set up monitoring software on your servers to log aggregate bandwidth usage, CPU usage, and RAM usage. Running 'time bitcoin-cli getblocktemplate' at occasional intervals can also provide information that can be useful to miners. Please chip in with other suggestions for monitoring commands to run.

We'll also need volunteers to help with data analysis for various topics, so if you're into that, nominate yourself and tell people what kind of data you want them to collect and how to collect it. Some questions we might be interested in asking about the stress test:

1. How many transactions can we get through before things start to bog down?
2. Which parts of the system bog down first?
3. What kind of block propagation latency do we get?
4. How much CPU usage on each node do we get?
5. Do nodes crash?
6. What getblocktemplate latency will we see?
7. Do any miners have their systems configured to generate blocks larger than 8 MB yet?
8. Can block explorers keep up with the load?
9. Will the bottleneck be transaction propagation or block creation?
10. Will mempool size inflate like a balloon? How much of a backlog will we see in the worst case?
11. Will the spam delay real people's transactions, or will the priority systems work well at getting the most important transactions through first?

posted by /u/jtoomim in /r/btc on August 31, 2018 20:24:09

Unsurprisingly, the [Bitcoin Dominance Index]( (BDI) tends to fall during periods of high BTC transaction fees. That is, when it is expensive to send BTC, money tends to flow to altcoins in which transaction fees are cheaper. You can get a rough picture of this by opening these two charts and setting the X axis to be the same on each: The fact that this relationship exists indicates that the markets are not accurately valuing each blockchain's potential capacity -- that is, the propensity of BTC to get congested more easily than ETH, XRP, BCH, LTC, etc. is not being priced in until congestion actually occurs. As a non-trading cryptocurrency user, I would rather that the markets were less dumb and myopic about this, so I am commissioning a good graphic to help educate the poor ignorant masses of day-traders. It would be much better if (1) we could have both datasets integrated into the same chart or CSV, and (2) we could compute the rate of change (e.g. week-on-week change) in the BDI and compare that to the fees graph. I will pay anyone who submits a chart or infographic illustrating this relationship. The best submission will receive at least 0.035 BCH. Runners-up will also receive awards proportional to quality at my discretion, probably in the 0.01 to 0.03 BCH range. Payment in non-BCH cryptos is also an option. Please post your receiving address(es) and preferred currencies along with your submitted graphic as a top-level reply to this post. All submissions must be under a permissive license (e.g. public domain, BSD, MIT, CC-BY). Submissions with raw data (e.g. csv) will gain extra points, especially if other people make submissions using your data.
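For item (2), the rate-of-change series is one line of code once the BDI data has been sampled weekly. This is just a sketch; the actual data would come from the charts linked above:

```python
def week_on_week_change(weekly_bdi):
    """Percent change of each week's BDI versus the previous week, ready to
    plot against the fee series. Input: one BDI sample per week, in percent."""
    return [100.0 * (b - a) / a for a, b in zip(weekly_bdi, weekly_bdi[1:])]
```

Plotting this series on the same X axis as weekly median BTC fees would make the "fees up, dominance down" relationship visible directly, rather than requiring readers to eyeball two separate charts.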

posted by /u/jtoomim in /r/btc on May 3, 2018 03:23:20

75% support means 3:1 approval. 95% support means 19:1 approval. The 95% threshold is a 6.3x higher bar to meet than a 75% threshold. That is a *massive* difference. Using a 95% threshold as the decision criterion strongly biases the system towards inaction. **[Some people](** might like that. I don't. I want Bitcoin to be able to adapt, grow, and progress in a reasonable fashion. I don't want to see Bitcoin become surpassed by some altcoin like Litecoin or Doge simply because we chose a threshold that makes progress unfeasible. Furthermore, if we used the 95% threshold for a contentious and divisive issue, it might encourage the supermajority to engage in vote manipulation. If, for example, 10% of the hashpower opposed the fork, it would be quite easy for the majority faction to intentionally orphan and 51% attack all of the blocks mined by that 10% minority. I find that scenario distasteful, and prefer to avoid it. The higher we set that threshold, the stronger the incentive to perform this attack becomes. At 95%, it seems pretty strong to me. At 75%, it's weak enough that I doubt it will be an issue. Using a high threshold like 95% gives excessive power to minorities., for example, controls more than 5% of the hashrate, so they alone could block such a fork. Most people here don't even know who runs, and have no idea what they stand for. Do we really want to rest the fate of Bitcoin in the decision of such a small entity? (Note: actually supports the hard fork. It's just an example.) Using 95% for a non-controversial fork is a good idea. When there's no doubt about whether the fork *will* be activated, and the only question is *when* it will be activated, then the optimal activation threshold is relatively high: the costs (in terms of old nodes failing to fully validate) of a fork are minimized by activating later, and the opportunity cost (in terms of the risk of never activating, or of delaying a useful activation) is small.
For a controversial fork, the optimal activation threshold is much lower: the opportunity cost (the risk of making an incorrect democratic decision) is much greater. In order to mitigate the cost of early activation (while full node support may still be low), we have a few tools available. First, there's the grace period. We can give everyone some time after the fork has been triggered (via 75% vote) and before the new rules are active. In the current version of Classic, that time is 1 month. Second, we can use the alert system and media channels to warn everybody of the coming fork and encourage them to upgrade. Third, we can coordinate with miners to not switch enough mining power on to cross the 75% threshold if it appears that full node adoption is lagging behind. With these things in mind, I think that the 75% threshold is about right for this kind of thing, and so I vote to not change it.
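The "6.3x higher bar" in the opening paragraph is just the ratio of approval odds at the two thresholds:

```python
def approval_odds(threshold):
    """A support threshold t corresponds to approval odds of t : (1 - t)."""
    return threshold / (1.0 - threshold)

odds_75 = approval_odds(0.75)   # 3:1
odds_95 = approval_odds(0.95)   # 19:1
bar_ratio = odds_95 / odds_75   # ~6.3x, the figure quoted above
```

Viewing thresholds as odds rather than percentages makes clear why the jump from 75% to 95% is not a "20-point" change but a several-fold increase in the consensus required.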

posted by /u/jtoomim in /r/Bitcoin_Classic on January 14, 2016 02:48:21

I have a few suggestions on how to vote in a helpful manner:

1. Vote against incomplete proposals. If a proposal doesn't have enough information for you to evaluate whether it's a good idea, oppose it mildly.
2. Read the proposals carefully before voting strongly on them. If you want to make a quick judgment call, that's fine, but only indicate slight support or slight opposition.
3. Think about the code and the practical aspects of including it. Don't vote for features just because you think they have advantages. Think about the costs (e.g. the amount of time it would take to implement them) too.
4. On pull requests, your vote is whether the PR should be merged *now*, not whether it should be merged at all. Change your votes over time. If you think some bug should block the merge, vote against and include a Con specifying why.
5. If the page describes a proposal inaccurately, vote against it and note the inaccuracy in the description in a Con. (E.g. [this SegWit proposal]( currently assumes 4x capacity gain for SegWit, which is simply false.)
6. If a proposal has inaccuracies in the description, let me, /u/toomim, /u/Quixotic_K, or /u/tkriplean know and we will make the page publicly editable. (Maybe we should make editability the default, and only lock topics if they're getting vandalized?)
7. Use the proportional voting system. Don't go all-for or all-against something unless it's really the most important thing on the whole page.
8. Try to keep things neat and tidy.
9. This is not a magical Bitcoin wishlist. Putting far-out stuff on here is not going to make it happen; it's just going to clutter the interface for people who are trying to use it for real work.
10. We need to think of a way in which we can remove stuff that's either dumb or out-of-date without being censorshippy. Suggestions welcome.
11. If you vote strongly for or against something, please add pro and con points that explain your reasoning. This is particularly important for when you strongly oppose something, like a pull request.

/u/tkriplean /u/toomim /u/Quixotic_K If you have ideas for how we can use this system better, please discuss them below.

posted by /u/jtoomim in /r/Bitcoin_Classic on January 13, 2016 15:59:04

Travis Kriplean, Kevin Miniter, and Mike Toomim have developed a tool for collecting and comparing opinions from large groups of people on political issues. It's called It's awesome. allows you to quickly view a histogram of opinions for each proposal to see how much support it has, and also has features for in-depth exploration of the reasons why each person or subgroup supports or opposes each proposal. We have put together a webpage for Bitcoin using this technology at Please use it. Contribute your opinions, add your reasons for your positions, and when necessary create new proposals and descriptions. You can learn how works by watching the 2 minute video on the main page, Travis is currently working on an opt-in validation feature to make vote manipulation difficult. We will be asking users to submit a photo of themselves with a handwritten note including 1. Your username on 2. The current date 3. The text "" If you have the time to submit a photo for your account using this format (like, that would be awesome. We will also add a check-box to filter out users who have not submitted this validation. This way, people who wish to be anonymous can make their opinions known, but those who suspect vote manipulation can exclude the anonymous cowards. We can also do verification by reddit if you have a reddit history that exceeds some arbitrary and not-yet-decided-upon threshold. Send me a message if you want to do this.

posted by /u/jtoomim in /r/bitcoinxt on January 2, 2016 22:21:01

Travis Kriplean, Kevin Miniter, and Mike Toomim have developed a tool for collecting and comparing opinions from large groups of people on political issues. It's called It's awesome. allows you to quickly view a histogram of opinions for each proposal to see how much support it has, and also has features for in-depth exploration of the reasons why each person or subgroup supports or opposes each proposal. We have put together a webpage for Bitcoin using this technology at Please use it. Contribute your opinions, add your reasons for your positions, and when necessary create new proposals and descriptions. You can learn how works by watching the 2 minute video on the main page, Travis is currently working on an opt-in validation feature to make vote manipulation difficult. We will be asking users to submit a photo of themselves with a handwritten note including 1. Your username on 2. The current date 3. The text "" If you have the time to submit a photo for your account using this format (like, that would be awesome. We will also add a check-box to filter out users who have not submitted this validation. This way, people who wish to be anonymous can make their opinions known, but those who suspect vote manipulation can exclude the anonymous cowards. We can also do verification by reddit if you have a reddit history that exceeds some arbitrary and not-yet-decided-upon threshold. Send me a message if you want to do this.

posted by /u/jtoomim in /r/Bitcoin on January 2, 2016 22:20:15

Travis Kriplean, Kevin Miniter, and Mike Toomim have developed a tool for collecting and comparing opinions from large groups of people on political issues. It's called It's awesome. allows you to quickly view a histogram of opinions for each proposal to see how much support it has, and also has features for in-depth exploration of the reasons why each person or subgroup supports or opposes each proposal. We have put together a webpage for Bitcoin using this technology at Please use it. Contribute your opinions, add your reasons for your positions, and when necessary create new proposals and descriptions. You can learn how works by watching the 2 minute video on the main page, Travis is currently working on an opt-in validation feature to make vote manipulation difficult. We will be asking users to submit a photo of themselves with a handwritten note including 1. Your username on 2. The current date 3. The text "" If you have the time to submit a photo for your account using this format (like, that would be awesome. We will also add a check-box to filter out users who have not submitted this validation. This way, people who wish to be anonymous can make their opinions known, but those who suspect vote manipulation can exclude the anonymous cowards. We can also do verification by reddit if you have a reddit history that exceeds some arbitrary and not-yet-decided-upon threshold. Send me a message if you want to do this.

posted by /u/jtoomim in /r/btc on January 2, 2016 22:19:11

posted by /u/jtoomim in /r/btc on December 27, 2015 20:07:11

Make sure your node is [logging data properly]( The following instructions are for Debian 8.2 (jessie). You may have to make some adjustments for other OSes. Post a comment if you have trouble.

Run these lines:

    sudo apt-get install lighttpd tcpstat
    sudo touch /var/www/html/debug-filtered.log.gz
    sudo touch /var/www/html/bw.log
    sudo chown `whoami`.`whoami` /var/www/html/debug-filtered.log.gz
    sudo chown `whoami`.`whoami` /var/www/html/bw.log

You may want to do the first line by itself to get through the password prompt before pasting in the other lines. Also, you might end up using /var/www/ instead of /var/www/html/, depending on your server and configuration.

Next, run `crontab -e` and add this line at the bottom:

    31 */2 * * * grep -v "eject " ~/.bitcoin/testnet3/debug.log | grep -v " tx " | grep -v "received: inv" | grep -v "sending: inv" | grep -v "received: getdata" | grep -v "received getdata" | grep -v "AcceptToMemoryPool" | gzip > /var/www/html/debug-filtered.log.gz

That should compress the log files down to around 1 to 5% of their original size, rewriting the file every two hours (at 31 minutes past the hour). The compression will take about 1 minute on most machines. If it takes too much CPU on your machine, you can reduce the frequency by changing the `*/2` to something like `*/4` (once every 4 hours).

Check to make sure you have the right version of tcpstat. There are two different programs with the same name:

    sudo tcpstat -h | grep tcpstat  # should say "tcpstat version 1.5", not "tcpstat 0.1 (c) J. Taimisto 2005-2013"

If you have the wrong version, you should go to and rebuild from source.

Next, run this every time your machine is restarted, either in a screen or in the background:

    giface=eth0  # unless it's not
    sudo tcpstat -f "port 18333" -o "%s,\t%B\n" -i $giface 0.1 -F > /var/www/html/bw.log

Open port 80 on your firewall if necessary.
Then tell me your IP address (and optionally add it to the [spreadsheet](, and I'll grab the log files and include them in my analyses. Edit: added AcceptToMemoryPool filter. Edit2: Changed paths to /var/www/html/ for the lighttpd default.

posted by /u/jtoomim in /r/bitcoinxt on December 1, 2015 01:50:19

Right now we have slightly less hashrate on testnet3 supporting BIP101 than we have supporting BIP65/v4 blocks. We also often have > 1 MB in our mempools. This means that the hashpower I have on BIP101 (about 4 TH/s) is creating hardforks which get abandoned as the v4 branch overtakes it. At block height 604592, a BIP101 node mined block 0000000000001a2f97411690cab82b1c5478594f9099862099de189334a661a2, a 4.56 MB block. (This block seems to have never made it to the block explorers.) We mined at least two other blocks on top of it, but we were eventually passed up by the v4 chain, causing the whole branch to be orphaned. I'll double our hashrate to try to make this happen some more, or possibly overtake the v4 chain. Spam appreciated.

In other news, I've been having some trouble with some of my nodes. is offline. If you were relying on it for a connection, you should probably add another node. I'll put up a Google spreadsheet soon that we can all edit to add nodes and with which we can keep each other updated regarding their status. is a good place to watch the forks. I don't know if any block explorers are following us accurately. DarthAndroid and /u/toomim are working on a visualization of block propagation times that might start coming online pretty soon.

**Edit: Please disable thin blocks for now. Thin blocks are buggy.**

    use-thin-blocks=0

**Edit2:** 2015-11-14 13:45 PST: It looks like the BIP101 fork has become stable for the last 12 hours or longer. This is probably because someone on Core shut off their hashrate for the weekend, so Core is no longer overtaking BIP101 in proof of work.

posted by /u/jtoomim in /r/bitcoinxt on November 13, 2015 18:44:10

The initial fork tests have gone well so far. We saw a lot of chaos due to how testnet works, but through it all it appears that the behavior of Bitcoin Core and BitcoinXT remained correct. We've gotten our feet wet. Now it's time to work on our toolchain so we can run these tests efficiently and accurately. Some projects:

1. We need a system for collecting and aggregating data from a large number of servers, preferably using only shell commands (like grep, tail, and nc). /u/DarthAndroid has made some progress on that, which he posted in the IRC chat:

        DarthAndroid jtoomim: "tail -f debug.log | nc 9000" will cause a node's log to start accumulating at by node IP address, which would allow someone to go back later and parse the logs for timing info.

    These log files are also available via rsync. Message me or /u/DarthAndroid for a copy. Warning: they're gigabytes in size.

2. We need a better way of maintaining and simultaneously controlling multiple VPSs. Something where you can type a keystroke in one prompt and have it simultaneously sent or mirrored to the other VPS ssh sessions would be awesome. I haven't had any good ideas for how to implement this. There must be some sysadmins with experience with this kind of thing, right? **Edit:** Cluster SSH is exactly what I wanted. Get it. It's awesome.

3. We need better spam generation methods. A lot of the spam generated so far has been made with a simple bash for loop:

        for i in `seq 1 10000`; do ./bitcoin-cli sendtoaddress $address 0.0001; done

    We could use more variation in spam than that, and also better generation performance. Some other people have probably been working on this, but I don't know who. Chime in? One of the things I want to test is the ability to handle transactions that are not in chains (i.e. lots of independent transactions), whereas I think the command above generates chains. Worth looking into. **Edit**: [Check here]( for inspiration.

4. We need better spam management. When a mining node is restarted, it forgets its mempool. Gavin posted a patch that he had used before that saves the mempool to disk, but the patch had some other stuff in it that I need to extract out. I'll work on that and try to get it into the fortestnet branch on my github ( Another option that we've been using so far is to run the line below on a server that is not being restarted. It's slow, and it uses a fair amount of network bandwidth (which can actually be a good thing for testing), and it mostly only works if the restarted node and the broadcasting node are connected:

        for line in `cli getrawmempool | sed -e 's/[{,"}]//g'`; do cli sendrawtransaction `cli getrawtransaction $line`; done

5. I need to hard-code the fortestnet branch to only run on testnet to make sure that people don't accidentally run it on mainnet. There are a few things in that branch that I think are not really safe enough for main.

6. A couple of people (bitsko and rromanchuk) are working on bringing SPV wallets into the testing rounds as well. SPV wallets are expected to break during a hard fork. It is informative to document exactly how they break and how badly they break. We would like SPV wallets to notify the user when a probable hard fork is occurring, so that users don't unwittingly act on incorrect information. Users of SPV wallets need to be told to sit back and not transact during a hard fork, or to switch to a fully verifying wallet if needed.

7. Memory usage and crashing: see comment below.

8. **[Command-line aliases](** (rebroadcast fixed)

9. [Node IP list](

10. [Block explorer]( -- crashes somewhat often due to constrained RAM; inform /u/rromanchuk if it goes down.

11. Bandwidth logging:

        mkdir ~/logs
        sudo apt-get install tcpstat
        giface=eth0  # unless it's not
        sudo tcpstat -f "port 18333" -o "%s,\t%B\n" -i $giface 0.1 -F | tee -a ~/logs/bw.log | nc 5005
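On point 3, the chaining happens because each `sendtoaddress` can spend the unconfirmed change of the previous one. Below is a rough sketch of a fan-out alternative; `cli` is an echo stub standing in for `./bitcoin-cli`, and the fan-out address is made up, so this runs without a node and just prints the commands it would issue.

```shell
# Sketch of non-chaining spam. `cli` is an echo stub standing in for
# ./bitcoin-cli, and the address is invented for illustration.
cli() { echo "bitcoin-cli $*"; }
FANOUT_ADDR=mHypotheticalFanoutAddress000000000   # made-up address

# Step 1: split one balance into several outputs, then wait for a block
# so they confirm.
fanout=$(for i in `seq 1 3`; do cli sendtoaddress $FANOUT_ADDR 0.01; done)
echo "$fanout"

# Step 2: one spend per confirmed output. With coin control (explicit
# inputs via createrawtransaction), each spam tx consumes its own UTXO,
# so no spam transaction depends on an unconfirmed parent.
spam=$(for i in `seq 1 3`; do cli sendtoaddress $FANOUT_ADDR 0.0001; done)
echo "$spam"
```

The key design point is step 1's confirmation wait: once the split outputs are buried in a block, the step-2 spends are mutually independent and can be relayed, mined, and dropped in any order.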

posted by /u/jtoomim in /r/bitcoinxt on November 11, 2015 09:24:23

We turned off the hashpower on the BIP101 fork at the end of our testing today. After I went to sleep, it seems someone started hashing on the legacy fork, and the work on that chain overtook the BIP101 fork. These are from an XT node that I know was following the big-block fork, and has several 9 MB blocks in its database:

    ./bitcoin-cli getblock 000000003199e2651d08bb2282d0896128d04ac3bf82f344d3ab92bf0061c80b | grep height
        "height" : 585471
    ./bitcoin-cli getblockhash 585471
    0000000000324544abe2531548faec6525a856999f3b46ecf9128a3f5b273d24

Those are not the same hash. Not the same block. The first block was orphaned at height 585471, but the second block is currently confirmed.

Here's another interesting block for the records:

    ./bitcoin-cli getblock 0000000015cbe557995bf95861d66231d01513ba99f06d23854522272c9049c3 | less
    {
        "hash" : "0000000015cbe557995bf95861d66231d01513ba99f06d23854522272c9049c3",
        "confirmations" : -1,
        "size" : 9115445,
        "height" : 586921,
        "version" : 536870919,
        "merkleroot" : "6fcc61f29be259c74bc95e8f57c249678572b82b343b6b2bb62897ce1cfd7283",
        "tx" : [ .... (9.1 MB of tx go here) ... ],
        "time" : 1447195582,
        "nonce" : 3149336320,
        "bits" : "1c276440",
        "difficulty" : 6.49874805,
        "chainwork" : "0000000000000000000000000000000000000000000000067d572257e0d8ef4b",
        "previousblockhash" : "0000000020e32b6f8f80950b3bb888294a8ece8e61b6ee4b0fed6800d0a44209"
    }

That's from a sequence of (IIRC) about five 9.1 MB blocks that we mined over the course of 7 minutes.
It's since been reorged out by an empty Legacy block:

    ./bitcoin-cli getblock `./bitcoin-cli getblockhash 586921`
    {
        "hash" : "00000000004572422af0fd54ac158ff430bf60003c54db27eb2e953d4ae2aa4c",
        "confirmations" : 14985,
        "size" : 262,
        "height" : 586921,
        "version" : 4,
        "merkleroot" : "857b0723d7999341cd0b9ff9fa0e112dbbb19535dc32c7f06d3721a528c0021a",
        "tx" : [
            "857b0723d7999341cd0b9ff9fa0e112dbbb19535dc32c7f06d3721a528c0021a"
        ],
        "time" : 1447140611,
        "nonce" : 2663112535,
        "bits" : "1c24a880",
        "difficulty" : 6.98332357,
        "chainwork" : "00000000000000000000000000000000000000000000000677759d3b1ee4737d",
        "previousblockhash" : "00000000007947bac4fb9dbc759704958bc38c2050f7cd236c2a7e0b8c859958",
        "nextblockhash" : "000000000048c36f0ed2bb68145ea4376ed4facfbbc020726204162f95692387"
    }

The second block was mined 54971 seconds (15.27 hours) later.

So far, the testnet chain has forked twice when it should have, and it has reorged twice when it should have. The first reorg was caused by us switching to Core to overtake the BIP101 branch. The second reorg was caused by us switching off our hashpower and someone else overtaking the BIP101 branch. So far I have not found any behavior that is incorrect.

**Edit Nov 12th:** We're currently focusing on developing tools to collect data from subsequent tests. We should be able to fork pretty much whenever we want, as long as we leave the hashrate off when we're not using it. If we leave the hashrate on, it's (a) expensive, and (b) harder to test the forking behavior, since we'd have to hash on Core to get it to catch up first.
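The hash comparison above generalizes to a simple check: a block has been orphaned exactly when the hash recorded at its height no longer matches what `getblockhash` returns for that height. A sketch follows, with the two RPCs stubbed to the canned values from this post so it runs without a node; swap the stubs for real `./bitcoin-cli` calls.

```shell
# Stubs returning the values quoted above; a real run would replace them:
#   height_of      -> ./bitcoin-cli getblock "$1" | grep height
#   active_hash_at -> ./bitcoin-cli getblockhash "$1"
height_of()      { echo 585471; }
active_hash_at() { echo 0000000000324544abe2531548faec6525a856999f3b46ecf9128a3f5b273d24; }

candidate=000000003199e2651d08bb2282d0896128d04ac3bf82f344d3ab92bf0061c80b
height=$(height_of "$candidate")
# If the active chain's hash at that height differs, our block lost the race.
if [ "$(active_hash_at "$height")" = "$candidate" ]; then
  status="active"
else
  status="orphaned"
fi
echo "block at height $height: $status"
```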

posted by /u/jtoomim in /r/bitcoinxt on November 11, 2015 06:34:32

Not exactly light reading. Summaries:

On the first day, we forked, then we unforked (reorged and merged back with Core), then we forked again. All went well. The reorg crashed a lot of block explorers, but the actual nodes seemed to come out just fine. --- That night, Lightsword and I talked about stuff for a while. You all will probably find this boring. Nothing happened here. --- On the second day, Gavin, DarthAndroid, sega01, I, and a couple of others did some work with larger blocks. We found out that the current blocksize limit on BIP101 testnet3 was 9,116,806 bytes. We made several blocks that size. We struggled with the logistics, and I had a lot of trouble trying to individually manage 7 VPSs via ssh at the same time. Eventually I mostly gave up on that and focused on three or four. We need to develop better tools. For the next few days, I think we'll mostly be focusing on tooling. Performance was erratic. In some cases, a 9 MB block seemed to hit the 5 VPSs that we looked at within 2 seconds of each other. In other cases, we saw spreads of about 30 seconds. I think timestamp miscalibration may have been part of the issue. Another part was that we didn't always know who was mining the blocks or where they were coming from. The main issue was probably just the fact that some of our nodes were connected to 6 Core nodes and only 2 XT nodes. Mining was much faster than on mainnet, with 1 block per minute in many cases. In one case, we had 100 blocks per second. Twas interesting times. Testnet is weird.

posted by /u/jtoomim in /r/bitcoinxt on November 10, 2015 21:24:55

I've seen a few posts by small-block supporters (e.g. [luke-jr](, [110101002]( claiming that 4 MB is larger than the network could handle securely today. So far, we've been making theoretical arguments back and forth with no success. Ultimately, it's an empirical question, not a theoretical one. Perhaps we should just implement an 8 MB network and see how it goes?

Here's what I propose: we set up a testnet-style altcoin that can be merge-mined with Bitcoin Core. The new coin will be almost identical to bitcoin's testnet, with a few differences:

1. 8 MB blocks will be permitted immediately.
2. 16 MB blocks will be permitted in 2017 (1 year before BIP101).
3. 32 MB blocks will be permitted in 2018, and the permitted blocksize schedule will be 2x what is permitted by BIP101 then and thereafter.
4. Block rewards increase over time, doubling every 4 years instead of halving: let nobody mistake this for an economic challenge to Bitcoin.
5. If possible, merged mining support would help a lot with testing the mining infrastructure. I know merged mining would require a fair amount of code to be written (or copied from Namecoin), but I think it would be worth it. It would be especially helpful if the merged mining could be done using two different servers, so that the main bitcoind mining process could be isolated from resource contention in the 8 MB process to whatever extent the pool or mining farm operator desires.

The new coin would need a name. Please make suggestions below. Here are three probably bad ideas:

1. POCcoin, for proof-of-concept
2. BIPcoin, for BIP101
3. Eightcoin

As a medium-size miner, I would be willing to support the testing of this project with a dedicated server, about 400 TH/s of merged mining, and possibly a dedicated 100 Mbps fiber line. If we can convince a few of the other major pools (especially in China) to put up a server for this project, we can set up a test network to see what limitations there are when trying to scale to 8 MB blocks.
If we can fix all of those issues (or see that they are easily fixed by the financial incentives of an actual mining network), then it should be easier to convince people to go for BIP101. In order to make this happen, we'd need at least one good programmer behind it. Any volunteers? Any reason why this would be a bad idea?

posted by /u/jtoomim in /r/bitcoinxt on September 8, 2015 03:18:47

> But if I were pro-ABC, I wouldn't be so happy.

I don't get any benefit from you using my node. I charge 0% fees. I maintain it for my own use, and it's no extra effort for me to make it available to the rest of the world. P2pool's design was intended for every substantial miner to run their own node locally on their own LAN.

Commented by /u/jtoomim in /r/btc on August 10, 2020 10:29:12

> And I'm referring to the fact that Jtoomim has signaled his political opinion in his P2Pool node blocks.

I treat the coinbase message as a billboard. It's an advertisement, and an irrefutable message from miner to miner, not a vote.

Commented by /u/jtoomim in /r/btc on August 10, 2020 09:11:48

> Does stratum2 also solve this?

If you're using standard channels, no. If you're using extended channels, yes. StratumV2 is a complicated beast. I don't think it's going to get widespread adoption of extended channels.

Commented by /u/jtoomim in /r/btc on August 10, 2020 08:10:08

No, that is not it. The issue is that miners are not pools. If a pool signals BCHN, then it loses any pro-ABC customers. If a pool signals ABC, then it loses any pro-BCHN customers.

Commented by /u/jtoomim in /r/btc on August 10, 2020 01:24:19

Hashrate has gone up more, and we had the halvings in May. Around 3.5¢ right now seems to be the minimum for making a profit with western operating expenses.

Commented by /u/jtoomim in /r/btc on August 10, 2020 01:00:59

It's not quite good enough for mining in this market. Especially with USA labor costs and the import duties. We used to pay 2.8¢, but then our utility company (PUD) decided to jack rates specifically on cryptocurrency customers, and then about 70% of the miners in our county stopped mining.

Commented by /u/jtoomim in /r/btc on August 10, 2020 00:53:56

I changed my coinbase text string a while ago. I think it makes our intent fairly clear: We don't have much hashrate online, though. Our power company increased our electricity rates to 4.3¢/kWh, and we have suspended most of our mining while we renegotiate.

Commented by /u/jtoomim in /r/btc on August 9, 2020 22:42:14

Or a little less. It was:

| Date | Price |
|------|------|
| Sep 13 | $438 |
| Sep 20 | $420 |
| Sep 30 | $555 |
| Oct 11 | $450 |
| Oct 20 | $444 |
| Nov 1 | $401 |
| Nov 5 | $554 |
| Nov 10 | $550 |
| Nov 14 | $432 |

So we're both right. It's hard to say exactly what time point is the best. BCH had been on a slow decline since Jan 2018, when the price was around $2400. If you go much farther back, the data end up being determined more by your place on that slope than anything fork-related.

Commented by /u/jtoomim in /r/btc on August 9, 2020 13:52:59

> you call their actions a "defense".

I would say that's an attack against an attack. It would be akin to a police action. It is the use of force, but it is a justified use.

Commented by /u/jtoomim in /r/btc on August 9, 2020 13:40:06

That foundation needs to be established and proven long before we can even consider providing it with ~$10m in funding via an IFP. Otherwise, it's too likely to be corrupt.

Commented by /u/jtoomim in /r/btc on August 9, 2020 13:37:59

Let me know what signaling method you decide upon. is currently mining with BCHN, though not with very much hashrate (our electricity rates increased, so we shut stuff off).

Commented by /u/jtoomim in /r/btc on August 9, 2020 06:47:04

Haipo is starting Bitcoin Cat instead of sticking with BCH.

Commented by /u/jtoomim in /r/btc on August 9, 2020 06:24:34

> Leaving cannot constitute a threat, unless you are reasoning from some extremely authoritarian priors.

Tell that to the Chinese and the investors, who are in a panic about this. They feel intimidated. They feel like their livelihood is at stake. I agree, forking should be allowed. However, people are afraid of the fork, and that fear is being exploited for political gain. This is especially true in China.

> Someone not accepting blocks does not involve force, again, except if you are reasoning from extremely authoritarian priors.

Tell that to Satoshi. He's the one who coined the term "attack" for this scenario.

Commented by /u/jtoomim in /r/btc on August 9, 2020 06:14:47

Bitcoin Unlimited has been a viable implementation since day 1. Classic and XT were also. This is just marketing nonsense.

> This allows Bitcoin ABC to make this much needed improvement while miners who may prefer other rules are free to choose a viable, alternate implementation

This is just ABC justifying forking off, because they can leave BCH mining to non-ABC nodes.

Commented by /u/jtoomim in /r/btc on August 9, 2020 06:10:51

> You were all fine with this hash reduction tax trick when Roger was backing it.

I supported it initially because it was an honest and open request. I revoked my support when it became clear that it was divisive. Now it's a demand, backed by a threat. I don't take kindly to demands or threats.

Commented by /u/jtoomim in /r/btc on August 9, 2020 04:36:28

> Define public debate?

[Ask ABC](

> While some may prefer that Bitcoin ABC did not implement this improvement, this announcement is not an invitation for debate. The decision has been made and will be activated at the November upgrade.

...

> The Coinbase Rule improvement is as follows: All newly mined blocks must contain an output assigning 8% of the newly mined coins to a specified address.

Commented by /u/jtoomim in /r/btc on August 9, 2020 04:32:27

> Coercion: the practice of persuading someone to do something by using force or threats.

"If less than 51% of blocks support the IFP, there will be a split." -- That is a threat.

"If you mine blocks that do not support the IFP, we will try to orphan your blocks." -- That is the use of force.

Commented by /u/jtoomim in /r/btc on August 9, 2020 04:24:09

> continually and publicly crucified Amaury to answer questions / address accusations on the spot

We wouldn't continually and publicly demand answers if he actually gave answers that made sense.

Commented by /u/jtoomim in /r/btc on August 8, 2020 22:44:34

> if they have a majority of the hashrate ... your block is orphaned

That means it's a 51% attack.

Commented by /u/jtoomim in /r/btc on August 8, 2020 22:43:18

> Does it bother you that me simply asking that question already gave me 4 downvotes?

Not really. I think it was a question that you should not have asked publicly. You were asking me for a promise to put in effort for (currently) zero pay. That put me on the spot, and put me in the difficult position of having to answer either truthfully (as I did -- I explained what I'm certain that I'm interested in doing for free) or dishonestly (promising the world in order to achieve political gain). It was not a good question to ask of someone in such a politicized and uncertain context.

> We're about to kick ABC out and have no long term committed devs,

I've been a Bitcoin dev since 2015. Freetrader has been around since at least 2016 (I don't know his/her full history). Tom Harding, Zander, Dagurval, and others have been around far longer than that. What's worth more: a promise, or a track record? Amaury Sechet was working at Facebook until 2017.

Commented by /u/jtoomim in /r/btc on August 8, 2020 15:59:38

It's a UASF. If you disobey it, and if they have a majority of the hashrate, and if there's no 10-block finalization rule, then your block is orphaned. Basically, it's like Segwit all over again. Except now Amaury's doing it instead of defending against it.

Commented by /u/jtoomim in /r/btc on August 8, 2020 15:48:17

At least enough to get block propagation capacity to the point where it can handle 1 GB blocks. I want to do that for fun, as long as I have a community that will be supportive of me in that endeavor. (Note: there are other bottlenecks in the code that will likely become significant around the 250-500 MB/10 min range, so 1 GB is my current near-future target just for the block propagation subsystem, not for everything.)

Commented by /u/jtoomim in /r/btc on August 8, 2020 15:47:47

> and possibly to another man (freetrader). No, that will not happen. Not again. We won't let it. People can learn.

Commented by /u/jtoomim in /r/btc on August 8, 2020 15:44:36

Just as a point of reference: BCH+BSV price is currently above pre-fork BCH. We're at around $530 now, whereas we were at $450 before.

Commented by /u/jtoomim in /r/btc on August 8, 2020 15:43:12

You're welcome, Greg.

Commented by /u/jtoomim in /r/btc on August 8, 2020 15:42:20

In China, maybe. It will be important to get ahead of them on this.

Commented by /u/jtoomim in /r/btc on August 8, 2020 15:39:00

I am a volunteer for I am willing to be vaccinated and then intentionally exposed to SARS-CoV-2 to test the effectiveness and safety of the vaccine.

Commented by /u/jtoomim in /r/btc on August 8, 2020 11:55:50

> I remember a reddit argument with you advocating involuntarily penetration of peoples bodies for the public good (re vaccination) yet now you talk about "coercion and theft".

You are mischaracterizing my position. What I argued for was preventing unvaccinated children from being admitted to public schools, as they are a physical threat to others.

Commented by /u/jtoomim in /r/btc on August 8, 2020 11:44:49

> I'm not sure if there are enough whales that care ... about Bitcoin ABC.

It's an untested hypothesis whether there's a relationship between how happy people are with Bitcoin ABC's leadership and how much funding they get. I think this hypothesis is likely to be correct, and worth testing by creating a different organization.

Commented by /u/jtoomim in /r/btc on August 8, 2020 11:40:45

> I am purposely aggravating at times in order to tease forth motivations nothing more. I am having fun here as an observer watching history unfold.

Okay, then I mistook intentional aggravation for an emotional outburst. I was trying to assume good faith, and was assuming that your goal was to discuss things with civility and with an open mind. It seems that I misjudged you. That doesn't make you a shill, of course. But [being intentionally aggravating is trolling]( We will treat you accordingly. /u/wisequote /u/chainxor

Commented by /u/jtoomim in /r/btc on August 8, 2020 11:36:26

P2pool nodes can edit the coinbase text however they want using a command-line switch. A few do. Many of the smaller ones are lazy or unaware of the option.

Commented by /u/jtoomim in /r/btc on August 8, 2020 11:31:07

It's not much code.

Commented by /u/jtoomim in /r/btc on August 8, 2020 00:36:10

A brief history of drift on BCH and BTC, and a list of the reasonable choices for reference blocks for a drift-correcting DAA like Grasberg, as illustrated in 3D:

Commented by /u/jtoomim in /r/btc on August 8, 2020 00:25:09

> (What was that shit about Amaury singing during the break, dude?)

It was a reference to something that happened before the meeting started. It was a joke in bad taste, and I'm sorry for it.

> Who the fuck is Josh Green? Seemed like he just popped in to stir Toomim’s shit for him.

He's the lead/only dev of Bitcoin Verde. I agree, he was out of line, and I told him so after the meeting. He apologized to David afterwards (before I told him that I thought he was out of line).

> /u/JToomim came off like an insufferable, know-it-all dick

I'm noticing that you're focusing entirely on personalities and politics, and not addressing the technological or economic-principle arguments. Yes, there was conflict. But it was usually over *substance*, not people. Focusing on people being angry is like saying that it doesn't matter whether their anger is justified, and that the only thing that matters is who loses their temper first.

Commented by /u/jtoomim in /r/btc on August 8, 2020 00:06:34

Yes, we valued the security and privacy of our customers, and only accepted visits from customers themselves. Nothing personal; it was just our interpretation of our fiduciary duty to those who had entrusted us with their money-printing machines.

Commented by /u/jtoomim in /r/btc on August 7, 2020 21:08:37

Voluntary funding isn't working ... ... ... for Bitcoin ABC.

Commented by /u/jtoomim in /r/btc on August 7, 2020 20:01:05

> q6mo hard fork schedule

Amaury was the main person insisting on that.

> conflict and one-ups man-ship is inevitable

Yes, we need to build a system that can help defuse and work through that kind of conflict. I'm actively working on it, and I expect to have the first bits of that work out within a few days. Haipo Yang is working on something with a similar goal, but a different (and possibly complementary) strategy. Bitcoin Cat seems to be his idea for how to test out his system. I'm in his Telegram chat, so the two systems might end up merging.

Commented by /u/jtoomim in /r/Bitcoincash on August 7, 2020 14:08:23

I think that is most likely. In the USA at least, non-profits (501(c)3) come with a ton of extra paperwork.

Commented by /u/jtoomim in /r/btc on August 7, 2020 14:06:06

Signaling is by block. DDoS is by IP address. It's not connected. The only security reason not to signal is if they are afraid of their blocks being *orphaned* in an attack.

The actual reason why signaling is not happening is because *miners are not pools.* Pool incentives discourage signaling during controversies:

* If a pool does not signal, it will keep most of its miners.
* If a pool signals, it will lose whichever miners disagree with the pool's signaling.

Commented by /u/jtoomim in /r/btc on August 7, 2020 13:16:05

The Bitcoin Classic and XT signaling were done via the block version field. This generally *is* primarily under full node control, or at least it used to be until AsicBoost. The BCHN signaling is via the coinbase text field, which is entirely under pool control, and which the node software cannot touch.
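Since the coinbase text is just bytes the pool chooses, detecting a signal is nothing deeper than substring matching on the decoded scriptSig. A sketch follows; both sample coinbase strings and the "PoweredByABC" tag are invented for illustration (I'm not asserting what tag ABC signalers would use), and a real scan would first decode each block's coinbase scriptSig from hex.

```shell
# Classify a decoded coinbase string by signal tag. The sample strings
# and the "PoweredByABC" tag are made up for illustration.
classify() {
  case "$1" in
    *BCHN*)         echo BCHN ;;
    *PoweredByABC*) echo ABC ;;
    *)              echo unsignaled ;;
  esac
}
classify "/ViaBTC/Mined by someone/powered by BCHN/"
classify "Mined by AntPool /RSK block:/"
```

Note what this implies: the node software never sees or sets this string, so a pool can put anything there, which is why the signal is voluntary and unprovable.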

Commented by /u/jtoomim in /r/btc on August 7, 2020 13:05:59

> unless Jonathan made even more 3D graphs

[I have](, actually. Most of them were never uploaded onto my server and are only on my local machine, though.

> showed a different one in the meeting).

No, you linked to the one I showed. The key, though, is in the interpretation. The graph itself is hard to read, because it has no labels, etc.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:57:58

> I don't understand why a BCHN miner wouldn't signal their support.

Miners can't signal directly. Only pools can signal. A pool that doesn't signal keeps all of its hashrate. A pool that signals loses whatever part of its hashrate disagrees with the signal. Also, with some pool software, getting it to signal is a huge pain in the ass. Sometimes it requires editing the source code and recompiling.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:53:48

> They are small hashers, hashing at some pool

They are not small hashers. They *are* hashing at some pool, or at least most of them are. They can switch pools if they are unhappy with how their pool will respond to the fork, and they can also run their own pool software if they need or want to. Signaling is up to the pool. Pools that don't signal can accept miners from either political side. As soon as a pool signals one way or the other, it loses the miners who disagree with it.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:48:30

It has only been a day. It usually takes more than 1 day for miners to change (for me, usually a month or two, not because it's hard, but because I'm usually busy with other things.) Also, we're looking at a 7 day average chart.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:44:21

It is also possible for a pool running ABC to signal BCHN falsely. The signaling concept is just inherently voluntary and unprovable.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:43:12

2% are signaling for BCHN. 0% are signaling for ABC. 98% are not signaling, and are using their 100 bytes of coinbase text for other purposes, like merged mining and other metadata.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:34:15

> mining nodes = miners = blocks.. no?

No, that is not how it works. Satoshi never foresaw the existence of pooled mining, so his "mining nodes" term as a way to count hashrate is simply inaccurate in a modern context. A block is a block. Blocks do not have any inherent property that makes it possible to determine who mined them. We just can't know which blocks are mined with which client unless the miner voluntarily announces that fact to the world. 98% of blocks are mined by miners who do not do that.

> I assume that 98% is using ABC

I do not. I know for a fact that a lot more is using BCHN than that. I don't know how much, but it's definitely double-digit percents by now, and possibly over 50% already.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:27:03

This was never about drift correction. It was about grift correction.

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:22:05

What makes you think that Bitcoin ABC is a non-profit organization?

Commented by /u/jtoomim in /r/btc on August 7, 2020 12:20:27

Because I believe that we can fix the repeated turmoil once we fix the Amaury issue. Amaury has been a consistent factor in all of the hard fork upgrade drama that we've had in all of BCH's history.

Commented by /u/jtoomim in /r/Bitcoincash on August 7, 2020 12:18:52

Why bother with a tax? Next time, the developer can just add UTXOs directly.

Commented by /u/jtoomim in /r/Bitcoincash on August 7, 2020 08:00:35

I've still got some ways to go before I make it to 4d chess, though.

Commented by /u/jtoomim in /r/btc on August 7, 2020 05:56:15

> this pledge is not an invitation for debate

omg lol

Commented by /u/jtoomim in /r/btc on August 7, 2020 05:35:55

Accountability and transparency are important for any operation that claims to provide a public good. That's basically everything that flipstarter aims to fund; otherwise, there would be no need for the assurance contracts.

Commented by /u/jtoomim in /r/btc on August 7, 2020 05:33:39

There's a 3D graph in the video. The video is worth watching if only for that.

Commented by /u/jtoomim in /r/btc on August 7, 2020 05:31:20

Some of the tax will go as a kickback to the miners who activate it.

Commented by /u/jtoomim in /r/btc on August 7, 2020 04:17:14

That's some nice blockchain unity you've got there. It would be a shame if something were to split it.

Commented by /u/jtoomim in /r/btc on August 7, 2020 04:16:26

No, some of that 8% will end up in the wallets of the miners that defend it.

Commented by /u/jtoomim in /r/btc on August 7, 2020 04:15:30

Accountability is also one of the features integrity can provide. We could simply choose not to fund projects that choose not to maintain transparent accounts. It seems to me like that is a reasonable minimum bar for any funding request.

Commented by /u/jtoomim in /r/btc on August 7, 2020 04:09:22

It turns out it was a metaphor. He was just spending a lot of time on walks with his dog. He's back now, and is working on the video.

Commented by /u/jtoomim in /r/btc on August 7, 2020 02:14:23

> Shills petting shills, the bunch of you.

And

> So were nazi soldiers in nazi Germany.

and

> You're just a tribalistic idiot.

All of you: chill out and calm down. Be civil in your disagreements. This constant shill-accusation bullshit reminds me of the Japanese internment camps during WWII. Don't presume people are guilty of working for your greatest enemy just because they share some opinions with the people you hate, /u/wisequote and /u/chainxor.

> I had an in-depth discussion directly with u/jtoomim

And rein in that temper of yours, /u/curryandrice. You had an in-depth conversation with me in which you appeared to lose your temper as well.

Both of you: assume good faith when you can. If you can't assume good faith, downvote and move on.

Commented by /u/jtoomim in /r/btc on August 6, 2020 19:54:02

No, undoing that is free. We just have to wait for a few more halvings, and it will go away on its own.

Commented by /u/jtoomim in /r/btc on August 6, 2020 06:35:00

It's not paranoia if everyone legitimately hates me!

Commented by /u/jtoomim in /r/btc on August 6, 2020 06:21:04

For subsequent meetings, I'm interested in trying to get someone from outside the BCH community to moderate. Like maybe someone from ETH or Monero or something. Doesn't really matter where they come from, just as long as they don't care about BCH.

Commented by /u/jtoomim in /r/btc on August 6, 2020 03:06:01

Not exactly. Exchanges want volume. Splits can cause a short-term increase in volume, yes, but can cause long-term reductions in volume. Splits also force exchanges to do a lot of extra work, especially if there's no replay protection. I think exchanges are kinda meh on it overall, but definitely against unprotected splits.

Commented by /u/jtoomim in /r/btc on August 6, 2020 00:25:26

Didn't Roger come out against Grasberg in one of his speeches at the webconf over the weekend?

Commented by /u/jtoomim in /r/btc on August 6, 2020 00:24:09

> Shouldn't that happen before it's on the roadmap?

Roadmaps should clearly specify whether something is a *research project* that will be formally proposed once more data are available, or something that is expected to be implemented and whose principles everyone should already have agreed to.

Commented by /u/jtoomim in /r/btc on August 6, 2020 00:21:56

> That Avalanche preconsensus thing is pretty shady.

I reserve judgment on it until I see the full spec. I do not have a lot of optimism for it, and I think Amaury could have been spending his time a lot more productively working on something else. The reddit discussions have left ... unanswered questions. There are many other ways to achieve safe fast transactions than Avalanche, and I suspect that Amaury may have gotten target fixation with Avalanche. Oh well.

Commented by /u/jtoomim in /r/btc on August 6, 2020 00:20:08

> I would also support throttling back the hard fork frequency to allow for greater and less rushed consensus-making,

I believe that scheduled hard forks should remain, at least for the next couple years. I think that 6 months is too frequent right now. I personally prefer something around 9-12 months. (9 months is already 50% more time. Also, since the deployment phase will still be about 3 months, a 9-month schedule means 100% more time to research, develop, pre-test, and debate. So we might not even need to go to 12.) I think hard forks should become less frequent over time as BCH gets more mature.

Commented by /u/jtoomim in /r/btc on August 6, 2020 00:15:43

> always and forever

We can't take this for granted. This kind of stuff is really difficult to pull off. The price of economic freedom is eternal vigilance. But yeah, take note: We are vigilant.

Commented by /u/jtoomim in /r/btc on August 6, 2020 00:13:43

It's been here for 13 days. I sent them that link, to no response. Their autolinter was broken when I first tried to `arc diff` it, which was the main reason why it ended up staying on my github. After that autolinter issue was fixed, their autolinter still got tripped up on the printf statements we had in the unit tests to aid in debugging, and there were a few other autolinting issues. I ended up spending about 3 hours in total trying to get ABC's autolinter and the rest of their submission process working (most of it was just tearing out and reinstalling clang-format-8 a few times), even though it only took 2 hours to port the code over to the ABC codebase. Also, I got tired of putting in effort when ABC wasn't putting in any effort at all on aserti3-2d, so I decided not to invest any more time in it until I saw at least half an hour's worth of effort from ABC at collaborating on aserti3-2d. I never saw that effort, so the version of Bitcoin ABC with aserti3 remains hosted on my github instead of their preferred phabricator system.

Commented by /u/jtoomim in /r/btc on August 5, 2020 23:10:42

Yep. He was a good, neutral moderator.

Commented by /u/jtoomim in /r/btc on August 5, 2020 15:24:27

Here are some self-serving links to my own CV. (I compiled my crypto CV recently for someone who asked, and some of those things are relevant to your question.)

Commented by /u/jtoomim in /r/btc on August 5, 2020 07:02:28

> The next couple years should be critical for BCH. I would prefer we not just coast along without governance for a while...

It shouldn't take years to get some form of better governance in place. I'm thinking more like 6 months. I think we'll have a prototype in place before the next hard fork feature freeze date (or have a system for choosing whether we even want to continue with that cycle -- that decision itself was one of Amaury's royal decrees), and we can test it out then and see if it's working or not.

> you offering to become dictator if someone else starts to does make me feel a little better

They wanted to make George Washington into a king. The reason why the USA is not a monarchy is because he stepped down voluntarily after 8 years. I'm not here because I want to seize power. With great power comes great responsibility, and that seems like a bit of a drag overall. I'd rather just jump in and take care of the problems that nobody else has figured out how to solve, and let the day-to-day stuff be handled by other people. Designing a system for good governance will probably be one of those problems that I'll need/want to help with.

> You thinking about a better process sounds great, but, time will tell if it is a dreamy unrealistic hope (for both of us).

I had a good talk with Travis Kriplean (who made for us in early 2016), and it sounds like we should have an alpha v0.1 for BCH pretty soon. It won't be a binding vote, but it should help us visualize opinions a lot better and improve the signal while cutting down the noise. We also are both getting excited about the idea of liquid coinocracy. I think it could be a good basis for a broader and more efficient system for group decision making. It might even be the optimal hive-mind algorithm for human civilization and society -- I think I might make a video or article on the general concept and its potential soonish.

It will take quite a bit of work to get it implemented, though, and since he now has a child he's the primary caregiver for, he can't work on it very fast himself. But if we can find another web dev to help him, whom Travis can guide and advise, we should be able to get something going. And maybe I'll have some time in between or after Blocktorrent to work on it myself.

Commented by /u/jtoomim in /r/btc on August 5, 2020 06:57:22

Calling it a governance model is not justified. He's just saying that we need more signaling. That includes users, miners, businesses, everything. Everything gets easier when we have more data.

Commented by /u/jtoomim in /r/btc on August 5, 2020 06:52:43

> I’m starting to think that Amaury is stoking this conflict on purpose.

If that's true, then the drama will be over soon. Have some hope, we're almost there.

Commented by /u/jtoomim in /r/btc on August 5, 2020 05:28:53

There have been a lot of people who, last month, had a certain expectation of what was going to be happening right now, and they have had that expectation shattered. Now it's time to start picking up the pieces and building more reasonable expectations.

Commented by /u/jtoomim in /r/btc on August 5, 2020 05:06:07

Good question. I don't know. Probably? I hope so. I think so. My opinion is that we should not let Bitcoin ABC affect BCH decisions any longer. Any influence we allow them to have is power that they have over us. We should try to collect as much of the community as we can, then move on and buidl. What they do from here on out is up to them. I think it's unlikely that ABC will do a no-change fork. They'll either go ahead with Grasberg and try to get as much out of it as they can (possibly just some exit strategy), or (more likely) they will adopt aserti3 and try to regain power in BCH. I don't see any profit for them for keeping cw-144.

Commented by /u/jtoomim in /r/btc on August 5, 2020 03:07:26

David Allen (the host) posted an image in the [WG] Difficulty Adjustment group on Telegram saying "GONE FISHING -- Be back someday". The meeting was rough for him. He'll need some time.

Commented by /u/jtoomim in /r/btc on August 5, 2020 03:05:09

Freetrader (BCHN's lead dev) personally wrote most of the aserti3 spec (he got around to starting it before I did -- I was more occupied with the science, like measuring the error margin for integer approximation methods to make sure I didn't miss anything), and has been doing more edits to the prototype aserti3-2d implementation based on the BCHN codebase over the last 2 weeks than I have, so I'd say there's, you know, a chance.

From freetrader's gitlab repo:

Here's the merge request for adding the aserti3 spec to BCHN's repository:

In the spec MR, you'll notice a lot of prominent full node and BCH developers making comments, like sickpig (BU), BigBlockIfTrue (BCHN), freetrader (BCHN lead), Jochen Hoenicke (Johoe's mempool), and Tom Zander (Flowee).

Commented by /u/jtoomim in /r/btc on August 5, 2020 01:25:44

No, he's not. He's a long-time member of this community, and one of BCH's more technically competent individuals. He used to do Avalanche stuff for BCH, but he left BCH to join AVA Labs.

Commented by /u/jtoomim in /r/btc on August 5, 2020 01:06:05

> There absolutely will be a split, at the very least between ABC and Knuth nodes.

This is not 100% certain. At this point, I think the best choice for ABC is actually to just cave in and implement aserti3. They really don't have a leg to stand on otherwise. They don't have much of a leg to stand on if they do, either. So who knows.

Commented by /u/jtoomim in /r/btc on August 5, 2020 01:02:26

Knuth decided the same thing as everybody else did except (so far) ABC. Everyone is going with aserti3.

Commented by /u/jtoomim in /r/btc on August 5, 2020 01:01:19

You forgot BCHD, Bitcoin Verde, and Flowee. If you look at Tom Zander's post history, there's no ambiguity about which way Flowee is going. Chris Pacia and Josh Ellithorpe have definitely not been shy about their opinions in the video meetings either. And if you saw the most recent meeting, Josh Green chewed out Amaury pretty harshly, and was unambiguous about his position. In terms of full node teams and their decisions, the debate is over. The only unknown right now is what ABC is going to do; everyone else is committing to aserti3.

Commented by /u/jtoomim in /r/btc on August 5, 2020 00:59:43

The irony is that if this were Ethereum, I would have proposed WTEMA instead of ASERT. (Ethereum requires block timestamps to be monotonic, which eliminates the main vulnerability of WTEMA. WTEMA performs no better than ASERT, but aside from the aforementioned vulnerability it's absurdly simple to implement, and the two are otherwise nearly equivalent.)
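To make the comparison concrete, here is a rough floating-point sketch of the two update rules (my own illustrative reconstruction, not the consensus code -- the real aserti3-2d implementation uses integer-only math, and the parameter names here are assumptions):

```python
import math

IDEAL = 600                # target block interval, seconds
HALFLIFE = 2 * 24 * 3600   # aserti3-2d uses a two-day half-life

def asert_target(anchor_target, time_delta, height_delta):
    # ASERT: the target is a pure function of how far the chain is
    # ahead of or behind the 600 s/block schedule, measured from a
    # fixed anchor block. Every half-life of accumulated lag doubles
    # the target (i.e. halves the difficulty).
    exponent = (time_delta - IDEAL * height_delta) / HALFLIFE
    return anchor_target * 2.0 ** exponent

def wtema_target(prev_target, solve_time,
                 alpha=HALFLIFE / (IDEAL * math.log(2))):
    # WTEMA: a first-order (EMA-style) approximation of the same
    # response. Each block nudges the target by the solve-time error
    # of the previous block alone -- which is why a timestamp that
    # jumps backwards (negative solve_time) can misbehave unless
    # timestamps are required to be monotonic, as on Ethereum.
    return prev_target * (1 + (solve_time - IDEAL) / (alpha * IDEAL))
```

In both rules an on-schedule block leaves the target unchanged; in ASERT, a chain that falls a full half-life behind schedule exactly doubles its target.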

Commented by /u/jtoomim in /r/btc on August 4, 2020 23:41:01

No, I was just referring to my own work and my own investment. I would literally rather send my BCH stash to a burn address than allow BCH to become corrupt while I sat back and watched. Fortunately, burning things to the ground is not necessary. We have a much better option. Since basically all of the BCH dev community is in agreement on this, we'll just build something better without Amaury.

Commented by /u/jtoomim in /r/btc on August 4, 2020 21:18:31

> From the telegram discussions, it seems this was not proposed at all.

It was discussed in the 3rd DAA video meeting on Monday, right after I published my article. That meeting has not been published yet, and might take some time. David kinda got overwhelmed by what happened, and needs some time off.

> And also, you seem to say drift-correction is a hard-no for you now?

Yes, my opinion changed while I was writing my latest article. It opened my eyes to how big of a problem it can be if we allow this kind of change to happen for anything except *extremely* solid reasons with very widespread community support. Pretty much the only reason I can think of to support changes to the issuance schedule or money supply is if we are facing dangerously low hashrate security and 51% attack threats, like if we're not able to get enough fee revenue to keep BCH running in 12+ years. There could be something else, but it has to be very specific and well defined, and can't be something silly and aesthetic like "makes it easier to predict the timestamp for a future block knowing nothing except its height," or "because the issuance schedule should be defined based on the genesis block." And the idea that we should do anything with the money supply solely because BCH's dictator asked for it is an epic `hell no` for me. I'd rather burn everything that I've worked on to the ground, ragequit, and walk away than allow that much room for corruption.

Commented by /u/jtoomim in /r/btc on August 4, 2020 20:43:52

> I didn’t know that the stonewalling by ABC was this bad. This is really infuriating.

It really is. The reason why there seems to be an "anti-ABC mob" is that a lot of us who have been paying close attention have noticed a pattern of subtle but heavy manipulation, sabotage, and power games from Amaury (and consequently, from ABC as a whole). But because he's usually pretty good at keeping his manipulation subtle, it only gets noticed by the devs and the people he's attacking or manipulating. This makes them seem like they're crazy, and so they usually get sidelined. Amaury comes out of each conflict as the hero and the victor, and his credibility goes up while his enemies' goes down. Fortunately for us, over years of this, these marginalized devs haven't completely disappeared from BCH. They've just found side niches to work in productively. And since I was able to pretty convincingly catch Amaury in the act of his manipulation this time, and show that he had no technical or philosophical leg to stand on when opposing aserti3 and preferring Grasberg, and must have had only *political* motivations, we have finally been able to get some momentum toward evicting him for all of the harm he's done.

Commented by /u/jtoomim in /r/btc on August 4, 2020 18:49:28

David Allen of Future of Bitcoin Cash has been hosting dev meetings for years. I was asked by a few people to join the video meetings to talk about DAA stuff. I did, and that was one of the few places where we were able to get any responses from Amaury about his justifications for historical drift correction. (I think he probably would have included justifications in text if he had any good ones, and only gave his best attempt during the meetings because he was pressured to by direct questions that he could not ignore.)

Commented by /u/jtoomim in /r/btc on August 4, 2020 18:43:34

> If all DAAs have indeed ignored historical drift and marched forward, then would it be a good thing to keep doing the same or stabilize?

Yes, I believe so. It's desirable to have DAAs that do not add (or subtract) new drift. The 1.7 sec/block of drift that the current DAA subtracts is quite good, and well within the acceptable range. ASERT will likely be similar. No new drift will be added (or at least, minimal amounts), and the current schedule and trajectory will be maintained. Meanwhile, the oscillation issue that has plagued BCH for the last few years and caused long average confirmation times and unfair mining incentives will be fixed. The current DAA's problem is oscillations, not drift. That was the original reason for changing the DAA, and that was what I designed ASERT to do. Amaury tacked on historical drift correction as a goal to his DAA proposal and tried to hijack the upgrade to solve his own personal goals, but his attempt was controversial, unnecessary, and unpopular, so BCHers shut it down.

Commented by /u/jtoomim in /r/btc on August 4, 2020 14:48:25

> COMMUNICATION
>
> As a general rule, communicating with stakeholders is better than not communicating with stakeholders.

I agree.

> BSV and BCH are currently priced comparably.

Personally, I think that's because BCH development has stagnated as the result of Amaury's stonewalling. We should be way ahead, and we were for a while, but the progress we should have made was not made. CTOR was added two years ago. Bitcoin ABC has not added any features that leverage CTOR. This was Amaury's responsibility to do (since he's the one who proposed it), and he just didn't bother to do anything about it.

Commented by /u/jtoomim in /r/btc on August 4, 2020 14:40:11

> that was affected by the current DAA

The current DAA has not distorted the timeline at all. The current DAA stays very close to 600 seconds per block. Since Nov 13, 2017, it has averaged 601.7 seconds per block. You might be thinking of the EDA, which lasted from Aug 1, 2017 until Nov 13, 2017. That added about 1,700 hours of drift in 3 months. In contrast, the BTC DAA added a total of 4,596 hours of drift from Jan 3, 2009 through Aug 1, 2017. Out of the current 6,279 hours of accumulated drift, 4,596 hours (73%) came from the BTC DAA. Since 2017, BTC has added even more drift, and BTC is now at 5,489 hours of drift. BTC currently has 87.4% as much total drift as BCH does. In about 3 years, if both chains maintain their current trajectories, BTC will have more drift than BCH.

> Grasberg overcomes the mining/issuance timeline (by adding delay) that was affected by the current DAA, while

All DAAs that have ever existed in crypto have ignored ancient history and marched forward. The purpose of blockchains is to make an immutable record of past events. The past is the past.
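The percentages above follow directly from the quoted drift totals; as a quick sanity check (hours of drift copied from the comment):

```python
# Hours of accumulated drift, as quoted above.
bch_total_drift = 6279   # BCH's total accumulated drift
btc_daa_share   = 4596   # portion inherited from the BTC DAA (2009-2017)
btc_total_drift = 5489   # BTC's own accumulated drift today

print(round(100 * btc_daa_share / bch_total_drift, 1))    # share inherited from the BTC DAA
print(round(100 * btc_total_drift / bch_total_drift, 1))  # BTC drift as a fraction of BCH's
```

This reproduces the ~73% and 87.4% figures in the text.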

Commented by /u/jtoomim in /r/btc on August 4, 2020 14:23:44

It's going to be hard to pick any name other than BCH for the non-ABC chain because it is going to be every current full node implementation other than ABC on that chain. I get that you're trying to be fair and not assume the name, though. Maybe just "BCH minus ABC"?

Commented by /u/jtoomim in /r/btc on August 4, 2020 14:18:20

> to overcome the patchwork for the upgrade/fork

I don't know what you're referring to. There's no patchwork with ASERT.

Commented by /u/jtoomim in /r/btc on August 4, 2020 14:13:30

Satoshi's halving schedule is BTC's. He defined the halving schedule as one halving every 210,000 blocks, and said that

> To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they're generated too fast, the difficulty increases.

He did *not* say that it would be exactly 600 seconds per block on average. He merely specified the algorithm. The algorithm (on BTC) has averaged 569.2 seconds per block since block 0 so far. *That is Satoshi's schedule.* If you add a delay to go back to 600 seconds from the genesis block, you're *rejecting* Satoshi's issuance schedule, not going back to it. If Amaury's goal was to get rid of the drift that the EDA had generated, then he should have chosen block 478557 (the last BTC block) as the reference, not block 0.
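The 569.2 s/block figure can be reproduced from calendar dates alone (the BTC height 642,222 is taken from a sibling comment in this thread, and exact block timestamps are ignored, so this is only approximate):

```python
from datetime import date

genesis = date(2009, 1, 3)   # BTC genesis block date
as_of   = date(2020, 8, 4)   # date of this comment
height  = 642_222            # approximate BTC chain height at that time

# Elapsed seconds since genesis, divided by the number of blocks mined.
seconds = (as_of - genesis).days * 86_400
print(round(seconds / height, 1))  # -> 569.2
```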

Commented by /u/jtoomim in /r/btc on August 4, 2020 14:02:49

> don't want BCH halving to occur before BTC

BCH's next halving will happen after BTC's no matter what. Even if we don't change the DAA, that will be true.

Current BTC block: 642,222
Current BCH block: 646,910

BCH has been averaging 601.7 seconds per block since Nov 13, 2017. At that rate, the next BCH halving will be in 1344.7 days. Over the last 200,000 blocks, BTC has averaged 577.55 seconds per block. At that rate, the next BTC halving will be in 1322.1 days.

This also doesn't matter at all. Halvings are a non-event.
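The day counts above follow from the heights and per-block averages quoted in the comment (the next halving for both chains is at block 840,000):

```python
HALVING_HEIGHT = 840_000  # next halving height for both BTC and BCH

def days_to_halving(height, avg_block_seconds):
    # Remaining blocks times the chain's recent average solve time,
    # converted from seconds to days.
    return (HALVING_HEIGHT - height) * avg_block_seconds / 86_400

print(round(days_to_halving(646_910, 601.7), 1))   # BCH -> 1344.7
print(round(days_to_halving(642_222, 577.55), 1))  # BTC -> 1322.1
```

Note that BCH's halving lands later despite its higher block height, because its blocks have been coming slightly slower than BTC's recent average.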

Commented by /u/jtoomim in /r/btc on August 4, 2020 13:57:49

No, that's not true. ASERT uses the Nov 15, 2020 hard fork as a reference, and keeps a schedule of approximately 600 seconds per block starting from the moment it is activated. ASERT does not care what happened before the Nov 15, 2020 hard fork.

> This "change" from satoshi's issuance schedule can and will be used as a precedent to future major changes to BCH.

ASERT is just continuing the schedule that we are currently on. There is no change. "Change" would mean switching to a schedule that we are not currently on. That is what Grasberg does.

Commented by /u/jtoomim in /r/btc on August 4, 2020 13:48:49