Bitcoin will have high fees. The block size shouldn’t be increased.

We’ve had this debate for years, and it’s cropping up again. I think we need to nip this in the bud now.

If you increase the blocksize then you will decrease fees, thus making it easier for VeriBlock and Coinbase to spam the network and never bother to optimise their platforms. When VeriBlock temporarily paused their system, confirmed on-chain transactions dropped from ~325,000/day to ~225,000/day. That should give you a sense of just how much spam is being caused by low fees.

If we go to 2MB, those companies will fill blocks up again. Fees will rise to the exact same level in a short period. Now you’ve got the same situation as before, except a whole lot of full node operators can’t keep up with the bandwidth so turn their nodes off. Blocks will propagate through the network slower, centralising mining and providing an unfair advantage to the previous block winner.

The final point, which really should shut this whole discussion down, is that **we’ve already been through this. Bcash exists.** It had the same code base as bitcoin at the time of the split. If that’s how you think Bitcoin should operate, then just use bcash and be done with it. LN can be built on top of bcash; it’s just that barely anyone wants to develop for it, because overwhelmingly technical people understand why raising the blocksize is a bad idea.

If you’re right, bcash will overtake Bitcoin and you can tell everyone I told you so. But you’re not going to convince us to change the block size.

And for the love of Christ, if you’re a non-technical investor, maybe consider a little humility before insulting the Core devs and demanding a solution you know nothing about to a problem which doesn’t exist.


42 Comments on Bitcoin will have high fees. The block size shouldn’t be increased.

  1. Mccawsleftfoot

    Guys, it’s not 2014 anymore. If you provide these companies with a free service they will use it all day long, at your expense.

  2. Cthulhooo

    Coincidentally, the number of transactions on omni also rapidly doubled in the previous week, then fell back over the last few days.

  3. ureindanger

    But you do realize that eventually we will have to raise the block size to 64–128 MB if we want global LN adoption to happen, right? So my question is: how long do you wait before raising the block size?

    In the meantime, until that happens, other chains will just gain new users and the congestion will become frustrating to deal with.

    Other than that, LN adoption will get crippled as well, because it will become expensive to open and close channels.

  4. zomgitsduke

    The block size should be steadily increased slowly over time. Like, painfully slow, but still with expansion carefully done over time.

    Expand it enough to expand capacity, but keep it at a small enough rate to stay well under Moore’s Law.

    My proposal is 12.5% increase every 4 years, opposite of each halvening, so we get one of those two things to happen every 2 years.

    Edit: Before the downvotes of “BIG BLOCK BAD BOO”, please consider that my proposal is 1/8 growth every four years, whereas Moore’s Law predicts 2× growth every two years. This keeps block size growth at a much lower exponential rate. Is it perfect? No. It’s a concept I’d like to discuss before y’all break out the pitchforks.
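    For concreteness, here is a tiny sketch of that comparison (the function names and the Moore’s-Law stand-in are mine, not part of the proposal):

    ```python
    # Sketch of the proposal above: +12.5% to the block size once per
    # 4-year cycle, compared against a Moore's-Law-style doubling of
    # hardware capacity every 2 years. All numbers are illustrative.

    def proposed_block_size(years, base_mb=1.0):
        """Block size in MB after `years`, growing 12.5% per 4-year cycle."""
        return base_mb * 1.125 ** (years // 4)

    def hardware_capacity(years, base=1.0):
        """Relative hardware capacity, doubling every 2 years."""
        return base * 2 ** (years // 2)

    for y in (4, 8, 20):
        print(y, round(proposed_block_size(y), 3), hardware_capacity(y))
    ```

    After 20 years the proposed size is still under 2MB while the hardware baseline has grown roughly a thousandfold, which is the gap the proposal is counting on.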

  5. bitusher

    > But you’re not going to convince us to change the block size.

    Changing the block weight limit is certainly fine in the future, if needed and if it finds consensus; it was part of the original scaling “roadmap”:

    >”**Further out**, there are several proposals related to **flex caps or incentive-aligned dynamic block size controls** based on allowing miners to produce larger blocks at some cost. These proposals help preserve the alignment of incentives between miners and general node operators, and prevent defection between the miners from undermining the fee market behavior that will eventually fund security.”

    Keep in mind that Bitcoin is required to eventually hardfork regardless, so we may as well include many wish-list items, including a permanent scaling option.

  6. TombStoneFaro

    Appreciate this intelligent post. If possible, please explain, as vaguely as necessary, your technical qualifications.

  7. GibbsSamplePlatter

    We already have up to 4MB blocks. With ~45% segwit adoption we average 1.3 MB blocks, and with more adoption and bech32 this will go even higher.

    Fee market is required for the survival of Bitcoin’s non-inflationary system. Let spammers exhaust their warchests and fund hashrate security.

  8. CaptainPatent

    > If we go to 2MB, those companies will fill blocks up again. Fees will rise to the exact same level in a short period.

    I don’t understand… If you reach full blocks again, wouldn’t you have the same fee pressure to optimize, but with double the throughput now?

    If users pay per byte, wouldn’t you double the cost to spam a full block?

    I can’t see how higher supply and the same demand wouldn’t result in a lower equilibrium price.
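    As a toy illustration of that point (all bids and capacities are invented), the marginal fee that clears a block falls when capacity doubles against a fixed set of bids:

    ```python
    # Given a fixed set of fee bids, the "clearing" fee rate is the
    # cheapest bid that still fits in the block. Doubling capacity
    # against the same demand lowers it. Numbers are hypothetical.

    def clearing_feerate(bids_sat_per_byte, capacity_txs):
        """Fee rate of the cheapest transaction that still makes it in."""
        included = sorted(bids_sat_per_byte, reverse=True)[:capacity_txs]
        return included[-1]

    bids = [100, 80, 60, 50, 40, 30, 20, 10]  # pending fee bids, sat/byte
    print(clearing_feerate(bids, 4))  # room for 4 txs -> clears at 50
    print(clearing_feerate(bids, 8))  # room for 8 txs -> clears at 10
    ```

    Same demand, double the supply, lower equilibrium price; a spammer who wants full blocks back now has to buy twice the bytes.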

    > Now you’ve got the same situation as before, except a whole lot of full node operators can’t keep up with the bandwidth so turn their nodes off. Blocks will propagate through the network slower, centralising mining and providing an unfair advantage to the previous block winner.

    Wait: in a peer-to-peer network, if you exclusively prune nodes that are having latency issues, you’re actually going to speed up overall propagation. This relates to an old ethereum uncle rate statistic post.

    In fact, if a node barely has enough bandwidth to keep up, by nature it would be behaving in a very leechy manner.

    I think it’s important to pay attention to the node set available and make sure that any proposed increase would result in a statistically non-gamable node set, but considering network speeds have approximately doubled from early 2017, I have a hard time seeing how a 2MB base block would decentralize the network in any meaningful way.

  9. otk16

    Who do you think is going to pay the miners if the block reward continues to decrease and if the number of transactions per block gets artificially throttled?

    How is LN going to help you if opening/closing a channel will cost a few thousand dollars (or more) for an on-chain transaction?

    The only benefit of small blocks is keeping the network more decentralized by allowing full nodes without lots of bandwidth to participate in the network. What do you think would be the motivation behind running such a node, if the operator is outpriced and can’t send any coins because transaction fees are too high?

  10. Kalin101

    The blocksize has to and will be increased. The question is when. We should be doing everything we can to scale. Both soft and hard forks, on and off chain etc.

  11. knaekce

    So, what is the optimal block size? Why 1MB? Why not 100kB, or 2MB, or 10MB?

    If the fees are >$100, LN will not work very well for small payments either. If closing a channel is expensive and the “dust” limit is higher than the value of most transactions, it’s not really trustless.

  12. yagoasp

    >except a whole lot of full node operators can’t keep up with the bandwidth so turn their nodes off
    >Blocks will propagate through the network slower, **centralising mining**
    >LN can be built on top of bcash, it’s just barely anyone wants to develop for it **because overwhelmingly technical people understand why raising the blocksize is a bad idea**

    Is there any evidence for these 3 theses?


    How does a spam transaction differ from a non-spam transaction? As far as I know, a blockchain is a chain of blocks filled with messages, and there is no division between spam and non-spam. A black person’s transaction will have the same priority as a white person’s transaction, given equal fees. There is no way to know the intentions behind individual transactions.

    In any case, if you recognize the problem of spam, then you recognize the vulnerability. Vulnerabilities should be eliminated. Waving hands and blaming others will not fix the problem.


  13. cthulhuburger

    While I understand the sentiment, there are also significant costs to high fees which you are not addressing. Higher fees decrease Bitcoin’s utility, which pushes use-cases to second layers (which itself isn’t purely good) and to altcoins.

    The fee market also brings an insane amount of complexity to Bitcoin. With altcoins I can create a reliable wallet by simply setting the feeRate to a fixed amount and broadcasting. With Bitcoin we have CPFP, RBF, and fee estimation complex enough that *not a single wallet* handles it all well. High fees also force users to work on complex optimizations (e.g. daily batch sends).

    Bitcoin itself has a lot going for it: first-mover advantage, network effect, most decentralized crypto. But I don’t think it’s reasonable to expect bitcoin to survive-long term if it has painfully high fees either.

    I think we need to aim at a future where fees are balanced between too low (see: tragedy of commons problems) and too high (see: decreased utility)

  14. cryptoplayingcards

    > If you’re a non-technical investor

    How about a non-technical *user*? And I think it’d be great to explain to those non-technical users why the block size shouldn’t be increased, in layman’s terms. Because obviously, when new users start coming in, they WILL ask questions about why you have to pay fees and why it’s not instant, like some other cryptocurrencies are.

  15. eyeofpython

    >**Bcash exists.** It had the same code base as bitcoin at the time of the split. If that’s how you think Bitcoin should operate, then just use bcash and be done with it

    I couldn’t agree more. If you don’t want to pay high fees for a secure on-chain transaction, go ahead and switch over to the other network. The roadmap there is completely different from Bitcoin’s, i.e. trying to optimize on-chain scaling vs. building a layer 2 solution. No need to have two coins that do exactly the same thing.

  16. thesoleprano

    “dont optimize the hardware, just make the computers bigger!”

    block size doesn’t need to be bigger. and yes, bcash exists because of this lol. and all the innovations are happening on BTC and not Bcash due to this same reason.

  17. NaabKing

    Well, I would say Litecoin exists: it has low fees, is not hostile, and works great. Also, you can buy LTC and do an atomic swap to BTC Lightning with VERY low fees. It just shows how LTC and BTC can work great together.

  18. VinBeezle

    There’s an obvious disconnect in the minds of the technically-inclined around here.

    You focus on security, code, and technicals, to the detriment of usability, affordability and most importantly: the purpose of the Bitcoin invention in the first place. Financial sovereignty for everyone.

    When “everyone” includes 5 billion people who can’t afford to onboard to LN, you’ve created a problem. Not solved one.

    This discussion is not just a technical discussion. It’s a humanitarian discussion.

    The single biggest priority for Bitcoin should be people. It is inappropriate for anyone to assert that Bitcoin “should be expensive”.

    Eliminating that “spam“ you keep talking about also eliminates 80% of the world’s population using it for real transactions.

    Marginal increases in the block size DO NOT automatically translate to centralization of nodes. Nobody’s expecting the block size to go unreasonably high.

    **A balancing act between L1 and L2 scaling and keeping both free/cheap to onboard, is the most obvious, common sense approach.**

    We need to keep bitcoin usable for everyone.

    If you’re going to change bitcoin into something that only rich people can use, you have changed it into something that has no semblance to the original.

    Edit: anyone who downvotes this – wow.

  19. LedByReason

    OP makes very few good arguments. Most of what he presents is an appeal to authority. I’m not going to flesh out both sides of the block size argument, because there have been many posts that have done it before.

    But I would caution anyone with any significant amount of money in BTC to make sure you understand BTC, LN and other cryptos very well, especially their limitations and design tradeoffs. Make sure you *use* BTC, LN and lots of other alternatives, so that you understand what they are like for users.

    Do your own research. Don’t take someone’s word on anything, as their incentives to argue a certain line of reasoning may not be obvious and may not align with your interests.

    Lastly, don’t use short-term price movements as evidence that a certain crypto is better or worse than others.

  20. mabezard

    Someone go ahead and create “blockcoin” or whatever. I wouldn’t mind getting more forked coins again and selling them back to you for your bitcoin at the foothills of our next rally.

  21. TheWierdGuy

    > The final point, which really should shut this whole discussion down, is that **we’ve already been through this. Bcash exists.**

    I have yet to see the model with specific metrics that was used to determine that 2MB is the appropriate block size. There is a LOT of room between not expanding the block size and indefinitely expanding it. What we don’t need is more posts like yours that come here defending a conclusion without a single reference to a scientific model that can explain how and why the conclusion of a 2MB limit was reached.

    What are the specific variables and constraints affecting storage, memory, processing power, network bandwidth and latency? Where is the model with actual numbers? Technology has evolved and gotten cheaper in the past 10 years, and will continue to do so in the future.

    It is absolute nonsense to declare the blocksize should not be increased while all factors that drive the determination of its size are improving. How in the world will we know when and how much it is possible to increase the block size without a model?

    Where is the model? Where is the model? Where is the model? Save yourself and everyone else from a pointless argument and just present the freaking model.

  22. YogaDream

    My issue with discussions about big vs small blocksize is that size is relative. So how do we know BTC or BCH or whatever other fork’s blocksize is “correct”? There should always be room for experimentation, and things do change with time. So maybe this is why the discussion is reoccurring.

    EDIT: Only time will tell who’s right, and whether there is only one “correct” answer to the question of blocksize.

  23. Myflyisbreezy

    Ok i know paper money and fiat and fractional reserve is not favorable around here, but hear me out.

    In the start of the americas, independent banks issued their own notes backed by the banks holding of bullion.

    In the future, banks might issue their own notes backed by private holdings of bitcoin.

  24. reddit4485

    However, we can increase fees specifically for OP_RETURN transactions or limit the number per block. This would make it much more expensive for Veriblock to add their spam but still keep block size the same.

  25. eqleriq

    > We’ve had this debate for years, and it’s cropping up again.

    Not really. It’s a few shit actors shilling b’cash, they never went away, they just change their rate of spamming.

    > That should give you a sense as to just how much spam is being caused by low fees.

    There is no spam, there is just using. As long as there is incentive to use bitcoin shittily, it will be used shittily. it doesn’t mean anything needs to change.

    > because overwhelmingly technical people understand why raising the blocksize is a bad idea.

    it doesn’t require “technical people” to understand that raising the blocksize creates an incentivized arms race: the more you increase the blocksize, the less feasible it is to store the chain on a small drive, the more centralized the network becomes, and the more power that gives to those who would inflate it further.

    If ONE b’cash adherent would diagram out this obvious inflation and propose a stopping point, a cap, or even a rate of increase, they’d have more people’s sympathy. Instead they have a chain built entirely on the premise of forking pre-existing bitcoin holdings for free money, and making use of BaNnEd features with their shitty asicboost backing.

    But really, it is in those adherents’ best interests to just spam and slow adoption for bitcoin while treating their unused, falsely valued chain very delicately.

    If bitcoin went away tomorrow and there was only b’cash, the spam would stop, negating the need for bigger blocks; but at any moment the spam could resume, creating bigger and bigger blocks and pushing out many nodes… And I bet all of the features they resist now would immediately be implemented.

  26. perogies

    To me it means that early investors and people who can afford larger investments (the accredited type of investor) will get their digital gold. Massive fees won’t matter to them, and it will be a great store of value. But, this will close the door to BTC for quite literally *billions* of people globally. This includes most of you if you weren’t already holding. Coins like BCH will have a much much larger group of people able to cheaply and easily use it as money, and when something is used globally for commerce it becomes a store of value because of that widespread use. You will have a bankers coin, the rest of the world will have Bitcoin as it was intended. I’ll have both so I’m all set either way lol.

  27. sxz54t

    >But you’re not going to convince us to change the block size.

    When you say “us”, just who are you talking about so we know who owns the BTC network. Thanks.

  28. Koinzer

    I support Luke-JR idea to lower the block size limit to 300Kb. This will:

    * force maximum utilization of LN
    * drive out spam transactions
    * force most services to optimize as much as possible
    * enable maximum decentralization, since everybody will be able to run a node

    Plus, it’s just a soft fork, very easy to implement.

  29. doobur

    Thank you for this post, I’ve tried to look for an honest explanation as to why the space is so divisive between btc and bch. I was called a “sock puppet” or a “shill” even though I was just looking for an honest explanation. The only thing I was able to gather was that people like Roger Ver were able to manipulate it due to its small volume.

  30. Lazyleader

    Can anyone tell me if OP has anything to say in the Bitcoin community? I don’t know him but according to his comments he is mentally unstable. I might have to sell my Bitcoin if he is speaking for the core team.

  31. ivanraszl

    The fact that we’ve been through this debate once before doesn’t mean it should never come up again, and again, in the future. As times change and technology improves, Bitcoin is different now, with SegWit and LN, than it was a few years ago, so the discussion will change. Even the Core scaling plan mentions a block size increase as an eventual method to support second layers.

    It all comes down to node count. The more full nodes verify transactions the more decentralized Bitcoin is, and thus more valuable it is. However, we should not automatically assume that smaller blocks increase or protect node count. The size of the blockchain is just one of the factors that limit full node operators. Let’s look at a list of incentives in both scenarios:


    **Small (1MB base block):**

    * Smaller blockchain requires less disk space. Disk space requirements grow continuously, so even at 1MB one will have to buy a bigger disk eventually. At 1MB the blockchain grows at only ~13GB/year, which means a 500GB disk will last another 20 years before it fills up. With ~2MB the same legacy disk would only last 10 years, which is still perfectly OK. The price difference between a 500GB and a 1TB disk is now only ~$15. Clearly, disk space is not a significant problem.
    * Smaller blocks require less CPU capacity to process. This seems to be a limiting factor at initial sync only. Currently it takes days on a slow computer to set up a full node; even so, processing is not an issue with slow CPUs. Increasing the block size would only gradually grow the blockchain at the end of the sync, so the initial sync would not double, just increase by 10–15% in a year, which only means a few extra hours for a year or two. If we look at the lowest-cost computers, Raspberry Pis, CPU performance improves ~300% with each version every few years. So, increasing the block size slowly shouldn’t pose any issues for node operators.
    * High fees incentivize users to run full nodes + LN nodes to be able to use LN and avoid high fees. But very high fees also prohibit safe operation of LN. So, here we need to be very careful and balance needs.
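    A back-of-the-envelope version of the disk-space arithmetic above (the ~240GB already-used figure is an assumption chosen to make the "20 years" work out; the growth rates are the comment's own):

    ```python
    # Years until a disk fills, given the chain's current size and a
    # growth rate. 13 GB/year corresponds to ~1MB blocks per the
    # comment; the 240 GB starting size is hypothetical.

    def years_until_full(disk_gb, current_chain_gb, growth_gb_per_year):
        return (disk_gb - current_chain_gb) / growth_gb_per_year

    print(years_until_full(500, 240, 13))  # ~1MB blocks -> 20.0 years
    print(years_until_full(500, 240, 26))  # ~2MB blocks -> 10.0 years
    ```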


    **Large (>1MB base block):**

    * A smaller average mempool crashes weak full nodes less often. See how the mempool increase at the beginning of April coincided with a drop in node count. Small increases in memory needs can be managed, as memory is getting cheaper too, but if the mempool grows to 400–1,000MB from the ‘normal’ 10–20MB, that’s too fast and node operators may run into issues. They can of course drop transactions, but then they become less useful as full nodes.
    * More transactions with low fees attract more Bitcoin users and reduce the need for alts — Bitcoin Maximalism becomes reality — and thus there will be more users and merchants running full nodes. We may lose 10% of nodes due to the larger blocks, but we may gain 20% more nodes due to increased usage, resulting in a net gain of 10%, which increases the safety and value of the network.
    * Safer LN, due to ample on-chain transaction capacity and low fees, increases LN adoption, and thus more people set up full Bitcoin nodes to be able to run LN nodes. However, if fees are very low, people may not care to switch to LN in the first place and will just keep transacting on the main chain. Thus, there should be a good balance. Maybe the increase is not straight to 2MB but just 1.25MB, slowly increasing, so we keep fee pressure high enough to incentivise second-layer growth, yet low enough to enable its security.

    Assuming you’re convinced that a small block size increase is beneficial to Bitcoin, is it even possible to push one through safely? Yes, but only if the following criteria have been met:

    * Schnorr has been implemented, and thus we have run out of on-chain optimizations. Once we have Schnorr, any block size increase will be much more impactful in terms of transaction throughput and thus less controversial.
    * The block size increase is gradual, slow, and long-term (we don’t need to hard fork multiple times). Not 2MB / 4MB / 8MB, but rather something like +0.25MB per year, resulting in 1.25MB, 1.5MB, 1.75MB, 2MB. This is very modest and will keep a balance between moderately high fees to keep the network secure and pressure for more on-chain transaction space to keep the second layers safely operational. Now maybe it’s not +0.25MB, but only +0.1MB, or higher, +0.5MB. I don’t know… But the idea is to keep the kettle boiling, yet keep relieving some pressure so it doesn’t just blow up.
    * The block size increase is supported by the dozens of Core devs working on Bitcoin, and is implemented in the Core software. The hard fork should not split the community. Obviously we can’t make everybody happy. Some will want to lower the blocksize rather than increase it. Others want a faster increase. But there is a good chance we won’t have a significant split in the chain or the community if the initiative is supported by the most well-respected devs in the Bitcoin ecosystem.

  32. theymos

    > If we go to 2MB, those companies will fill blocks up again.

    First, blocks can already be 2MB (or larger) due to SegWit.

    If you double the maximum tx throughput, then you double the amount that has to be spent every <time_period> in order to keep the fee high. If for example an attacker is trying to keep the fee at $10/kB and is currently spending $100k per hour in order to do so, then doubling the transaction throughput would require him to either double his spending to $200k *per hour* or else halve his target fee to $5/kB. These are not negligible effects: both add up quick. (And it functions the same with high fees due to real market forces rather than an attacker.) It’s true that there’s essentially an unlimited demand for $0/kB transactions, but there’s much less demand for $0.10/kB transactions, and even less for $1/kB transactions, etc. There’s a limit on just how much Veriblock can spend per hour in total, and the more transaction throughput this is spread out among, the lower the network-wide fee that results.
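    That arithmetic can be written out directly (the 10,000 kB/h throughput figure is a hypothetical chosen to reproduce the $100k/hour example):

    ```python
    # Cost to an attacker of holding the fee rate at a target level:
    # they must outbid everyone on every byte of available throughput,
    # so doubling throughput doubles the spend needed at a fixed fee.

    def attacker_cost_per_hour(target_fee_per_kb, throughput_kb_per_hour):
        return target_fee_per_kb * throughput_kb_per_hour

    base    = attacker_cost_per_hour(10, 10_000)  # $10/kB -> $100,000/h
    doubled = attacker_cost_per_hour(10, 20_000)  # 2x throughput -> $200,000/h
    print(base, doubled)
    ```

    The same relation read the other way: at a fixed hourly budget, doubling the throughput halves the fee rate the attacker can sustain.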

    When considered alone, higher transaction throughput is good. It allows for additional use-cases and more useful economic activity. Striving for high fees as a goal unto itself is nonsensical. As such, the max block size should be **as large as *safely* possible**, where “safety” includes factors like decentralization and possibly mining incentives. If the max block size was 100kB, then this would be far below what is necessary for safety, and it would be correct to increase it. When Luke-Jr argues (mostly alone) that the current max block size is too high, even he does this from the perspective of *safety*, not because he really likes the idea of $20/tx fees. You could argue that it would be safe to increase the max block size from its current value, but such arguments should be entirely from the point of view of *safety*, and “fees are too high” should never be part of the argument. If airplane tickets are too expensive, you acknowledge this as the market’s natural reaction to a limited resource: you don’t take down the “max capacity” sign to fit more people. But you also don’t need to fill up only half of the plane at double the ticket price just for the hell of it, which is sort of the vibe I get from this post.
