There's no "upload to the blockchain" if you can't download it later. There's no guarantee that you can getrawtransaction() from nodes for free. If you want to get your data back after uploading, you need an SLA. Will you "upload" your data to nobody's server and hope to get it back?
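(For concreteness: "downloading later" in practice means asking a node for the raw transaction over JSON-RPC. A minimal sketch, assuming you have RPC access to a node that still holds the transaction, e.g. one running with -txindex; host, port, and credentials are placeholders.)

```python
import base64
import json
import urllib.request

def get_raw_transaction(txid, host="127.0.0.1", port=8332,
                        user="rpcuser", password="rpcpass"):
    """Ask a node for a raw transaction via the getrawtransaction RPC.

    This only succeeds if the node still has the tx (mempool, or an
    unpruned chain with -txindex) and is willing to serve it to you;
    otherwise the RPC returns an error.
    """
    payload = json.dumps({
        "jsonrpc": "1.0",
        "id": "fetch",
        "method": "getrawtransaction",
        "params": [txid, False],  # False = return the raw hex
    }).encode()
    req = urllib.request.Request(f"http://{host}:{port}/", data=payload)
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]
```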
marquee tipped:
aquamane tipped:
Who has an SLA with who? Apps may have an SLA with miners, but one of the perceived benefits of storing data on chain is that a user can (in theory) migrate to another provider.
electrumsv replied:
You have an SLA with a data storage service. You may have to upload the transaction to them at the same time you broadcast it, or the broadcast service may be part of the data storage service. This whole fire-and-forget magical model of on-chain data storage seems like wishful thinking to me. -- rt12
linzheming tipped:
linzheming replied:
Agreed with you, rt12. Let me elaborate a little further. The miner node network as a whole is the Bitcoin service provider for individual services. The service is provided in a more resilient, "decentralized" form so that individual services can be built on solid ground. Service providers (Apps) go to several miners to have their agreements signed, and can easily switch between them. They pay fees to miners in exchange for the ability to choose ("migrate") among different miners. If Apps pay a specific data storage service provider to store data, that provider can simply hold a Merkle proof which proves the integrity of the data in a transaction; there is no difference between on-chain and off-chain on this point.
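(To illustrate the Merkle-proof point: given the transaction ID, the branch of sibling hashes, and a block header's Merkle root, anyone can check that the stored data is exactly what went on chain. A minimal SPV-style verification sketch; the function and argument names are my own.)

```python
import hashlib

def double_sha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_proof(txid_hex: str, branch, merkle_root_hex: str) -> bool:
    """Check that a txid is committed to by a block's Merkle root.

    `branch` is a list of (sibling_hash_hex, sibling_is_right) pairs from
    leaf to root. Hashes are given in the usual display order, so they are
    reversed here because Bitcoin hashes internally in little-endian order.
    """
    node = bytes.fromhex(txid_hex)[::-1]
    for sibling_hex, sibling_is_right in branch:
        sibling = bytes.fromhex(sibling_hex)[::-1]
        pair = node + sibling if sibling_is_right else sibling + node
        node = double_sha256(pair)
    return node == bytes.fromhex(merkle_root_hex)[::-1]
```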
shadders tipped:
shadders replied:
The analogy is paying to have data stored on Amazon S3. It's redundant by default, you can easily migrate it to another storage provider, and it doesn't stop you keeping your own local copy. Bitcoin is strictly better than this option (not factoring in cost) because if you stop paying Amazon S3 the data is gone, whereas if you stop paying your Bitcoin provider there's a very good chance you'll be able to get the data back at a later date. You just shouldn't assume this is guaranteed. If you want this guarantee, put the data in a spendable output and be prepared to pay a little more in tx fee.
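(A rough sketch of the two output styles being contrasted here: a provably unspendable OP_FALSE OP_RETURN data carrier, which miners may prune once it is mined, versus data pushed and then dropped in front of an ordinary P2PKH lock, which sits in the UTXO set until it is spent. The opcode values are standard Bitcoin script; push handling is simplified for brevity.)

```python
OP_FALSE, OP_RETURN, OP_DROP = 0x00, 0x6a, 0x75
OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xa9, 0x88, 0xac

def push(data: bytes) -> bytes:
    # Simplified: a direct push only covers up to 75 bytes;
    # longer data needs OP_PUSHDATA1/2/4.
    assert len(data) <= 75
    return bytes([len(data)]) + data

def op_return_script(data: bytes) -> bytes:
    """Provably unspendable data carrier; nodes may prune it after mining."""
    return bytes([OP_FALSE, OP_RETURN]) + push(data)

def spendable_data_script(data: bytes, pubkey_hash: bytes) -> bytes:
    """Data pushed then dropped, followed by a normal P2PKH lock.
    The output remains in the UTXO set (so nodes keep it) until spent."""
    return (push(data) + bytes([OP_DROP, OP_DUP, OP_HASH160])
            + push(pubkey_hash) + bytes([OP_EQUALVERIFY, OP_CHECKSIG]))
```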
marquee tipped:
marquee replied:
Exactly @shadders, this is what a lot of people miss. If you really need to guarantee that your data is not pruned, put it in a spendable output. It still doesn't mean that a miner will provide you that data for free, and app developers should cache all their data locally.
aquamane replied:
@electrumsv @linzheming IIUC, it would make sense that I should only have to broadcast the TX to the network once. This would be the fire-and-forget (Event Sourcing or Producer/Consumer) model; I shouldn't have to multicast a TX. Instead, a durable storage service provider can be monitoring for TXs with specific criteria... maybe by deploying a discrete stamp of a Planaria configured with one purpose: index my data to durable storage using Bitfeed/Bitbus. If I'm tracking correctly, the only thing I'm not 💯 on is whether miners can truncate the data before it is available downstream, or whether the service provider would have to monitor + capture + correlate confirmed TXs + purge unconfirmed TXs.
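(A generic sketch of the consumer side described above, deliberately not tied to Planaria/Bitbus specifics: an indexer consumes a transaction stream and persists anything carrying the app's protocol prefix. The prefix, stream source, and storage layer are placeholders.)

```python
APP_PREFIX = b"myapp"  # hypothetical protocol prefix pushed in the data output

def carries_app_data(raw_tx: bytes) -> bool:
    # Crude filter for the sketch; a real indexer would parse the outputs
    # instead of scanning the raw bytes for the prefix.
    return APP_PREFIX in raw_tx

def index_stream(tx_stream, store):
    """Consume (txid, raw_tx) pairs and persist the ones that match.

    `tx_stream` could be fed by Bitbus, a node's ZMQ feed, or a polling
    loop; `store` is any durable key-value storage with a put() method.
    """
    for txid, raw_tx in tx_stream:
        if carries_app_data(raw_tx):
            store.put(txid, raw_tx)
```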
linzheming replied:
The durable storage service provider should get your data and send it to the miners. No need to monitor for TXs.
What is your opinion on zero-satoshi tokenization, as proposed by Shadders in a recent paper? Will Mempool support 0-satoshi tokens so long as transaction fees are paid?
linzheming replied:
I'm open to zero-satoshi tokens; as long as we're doing business with an entity, that would not be a problem. But I don't think miners will accept a zero-satoshi output and keep it for nobody. Although the output is zero in amount, the user still has to pay; they might need to pay enough for miners to validate their transactions later. We have to go deeper into the rabbit hole.
electrumsv replied:
Makes a lot of sense to me. -- rt12
shadders replied:
Just to correct the record... I didn't propose the idea, Satoshi did when he released Bitcoin v0.1.0. It has always been possible.
slictionary tipped:
slictionary replied:
I know you didn't. Just not sure it's in BSV's best interest, but I hope I'm missing something obvious. Your point is well taken, and it's why I'll make it my personal business to get Craig to talk about this topic a bit more, as it's delicate, like a BTC-er's ego.
arbusto replied:
Can u link the paper?
slictionary replied:
It wasn't a paper on zero-satoshi outputs, it was just a mention that the dust limit is going to zero eventually, which is what makes them possible. https://bitcoinsv.io/2020/09/16/beyond-micropayments-the-rise-of-nano-services/ "Firstly the creation of dust outputs themselves are still limited to some degree by the dust limit. This limit will be removed entirely by the end of the year allowing even 0 value outputs. In the meantime the 1.0.5 release of bitcoin has at least fixed the hard limit and made it a function of the relay fee set by the Miner. This means that Miners who have upgraded to 1.0.5 should accept outputs greater than 140 satoshis."
Totally agree.
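(For reference, the arithmetic behind the ~140 satoshi figure quoted above: making the dust limit a function of the miner's relay fee follows the usual rule that an output is dust if spending it would cost more than a third of its value. A sketch; the byte sizes and the example relay fee are assumptions.)

```python
import math

def dust_threshold(relay_fee_per_kb: int,
                   output_size: int = 34,        # typical P2PKH output
                   spend_input_size: int = 148   # typical input spending it
                   ) -> int:
    """Dust limit as a function of the relay fee (satoshis per 1000 bytes).

    An output is treated as dust if spending it would cost more than
    one third of its value, i.e. threshold = 3 * fee to relay the
    output plus the input that later spends it.
    """
    return math.ceil(3 * (output_size + spend_input_size)
                     * relay_fee_per_kb / 1000)

# With a relay fee around 250 sat/kB this lands in the ballpark of the
# "greater than ~140 satoshis" figure quoted above:
print(dust_threshold(250))  # 137
```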