I am trying to mine on a private network.
How does one go about creating a genesis block for a private network in frontier ethereum?
I have seen: https://blog.ethereum.org/2015/07/27/final-steps/ but this is to get the public Genesis block.
{
"nonce": "0x0000000000000042",
"difficulty": "0x000000100",
"alloc": {
},
"mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
"timestamp": "0x00",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"gasLimit": "0x16388"
}
You can simply take the generated one here and modify the accounts and balances.
Also set the gasLimit to a higher number like 0x2dc6c0 (3 million) and lower the difficulty to 0xb.
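For example, a private-net genesis with one pre-funded account and the adjusted gas limit and difficulty might look like this (the address and balance are made up for illustration):

```json
{
  "nonce": "0x0000000000000042",
  "difficulty": "0xb",
  "alloc": {
    "0x1234567890123456789012345678901234567890": { "balance": "1000000000000000000" }
  },
  "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "timestamp": "0x00",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "gasLimit": "0x2dc6c0"
}
```

The balance is in Wei, so 1000000000000000000 pre-funds the account with 1 Ether.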
You can basically create any Genesis Block that you like, as long as it is valid according to the Yellowpaper, 4.3.4. Block Header Validity.
The Genesis Block does not indicate on which Blockchain a miner works. This is defined by connecting to the right peer-to-peer network or, if you are using the discovery mechanism on a network with multiple Blockchains running, using the network ID.
The (Genesis) Block only describes the parameters of this specific Block; they are set and checked by the Miner's algorithm. Of course, any illegal behavior will be rejected by the consensus mechanism.
In conclusion, you can use the same Genesis Block for all custom Blockchains.
The values that have to be correct in terms of mathematical validation are nonce (Proof of Work), mixhash (Fowler–Noll–Vo reduced DAG value set), and timestamp (creation time). The geeky values in this example are a copy from the original Frontier release Genesis Block. The parentHash points to the parent block in the chain; the Genesis Block is the only Block where 0 is both allowed and required. alloc allows you to "pre-fill" accounts with Ether, but that's not needed here since we can mine Ether very quickly.
The difficulty defines the condition the Miner's (hash) algorithm must satisfy to find a valid block. On a test network, it's generally kept small in order to find a block on every iteration. This is useful for testing, since blocks are needed to execute transactions on the Blockchain. The block generation frequency is effectively the response time of the Blockchain.
The gasLimit is the upper limit of Gas that a transaction can burn. It's inherited by the next Block. extraData is 32 bytes of free text where you can et(h)ernalise smart things on the Blockchain :) The coinbase is the address that receives the mining and transaction execution rewards, in Ether, for this Block. It can be 0 here, since it will be set for each new block according to the coinbase of the Miner that found the Block (and added the transactions).
I have documented this a bit more in detail here.
Hope this helps :)
{
"config": {
"chainId":2010,
"homesteadBlock":0,
"eip155Block":0,
"eip158Block":0
},
"gasLimit": "0x8000000",
"difficulty": "0x400",
"alloc": {}
}
Only the above attributes are accepted in Geth version 1.9 (go1.9).
Specifically, the genesis block building for private network is well explained in this short article.
One thing I want to mention here is that the genesis block differs from all other blocks only in that it has no reference to a previous block.
So I recently was looking for a way to add extra metadata to logs and found out that syslog got me covered. I can add custom metadata using SD-ID feature like this:
[meta@1234 project="project-name" version="1.0.0-RC5" environment="staging" user="somebody@example.com"]
The problem is that 1234 has to be a syslog private enterprise number.
I assume those are given to big companies like Microsoft or Apple, but not to indie developers.
So my question is: is there a reserved number that everyone could use for internal purposes without registration?
If you use RFC5424-formatted messages, you can (or could) create custom fields in the SDATA (Structured Data) part of the message.
The latter part of a custom field in the SDATA is, as you mentioned, the private enterprise number (or enterpriseId).
As defined in RFC 5424:
7.2.2. enterpriseId
The "enterpriseId" parameter MUST be a 'SMI Network Management Private Enterprise Code', maintained by IANA, whose prefix is iso.org.dod.internet.private.enterprise (1.3.6.1.4.1). The number that follows MUST be unique and MUST be registered with IANA as per RFC 2578 [RFC2578].
Of course it depends on what you're using it for: if it's only for local logs, you can use any enterpriseId, or you can even use a predefined SDATA field with a reserved SD-ID and rewrite its value. (See: the syslog-ng Guide)
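For local use, a small helper can compose such an SD element. A sketch (the SD-ID `meta@32473` is illustrative; 32473 is the enterprise number IANA reserves for documentation examples per RFC 5612, so it is a reasonable placeholder when you have no registered number):

```javascript
// Build an RFC 5424 structured-data element like [meta@32473 key="value" ...].
// Param values must escape '"', '\' and ']' per RFC 5424 section 6.3.3.
function sdElement(sdId, params) {
  const body = Object.entries(params)
    .map(([k, v]) => `${k}="${String(v).replace(/[\\"\]]/g, c => "\\" + c)}"`)
    .join(" ");
  return `[${sdId} ${body}]`;
}

// Example:
// sdElement("meta@32473", { project: "project-name", version: "1.0.0-RC5" })
```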
I've been trying to get the HTTP verbs right lately, however I have a doubt regarding using PUT, PATCH or even POST for the following scenario.
The front end part is sending the following JSON data:
{
name: "Spanish-01",
code: "ESP01",
students: [{
IdStudent: 1,
name: "Peter Parker"
},
{
IdStudent: 2,
name: "Ben Reilly",
dirtyRemove: true
}]
}
The back end code will update the Class record (e.g name and code). However, it will also delete the students with flag dirtyRemove, and those live in another table called Student.
So what's the rule here? According to the spec (linked at w3.org), PUT and PATCH are for updating an existing resource, but in this case the back end is both updating and deleting at the same time.
Should I use PUT or PATCH or neither?
NOTE: Don't mind about the FE part, I minimized the scope in order to get a more straightforward example
How your resources are implemented internally using tables is an implementation detail. It doesn't matter.
That said, your example payload doesn't fit PUT (to remove a student, you would omit it). It might fit PATCH, if you properly label the payload with a content type describing what semantics you expect.
Nit: the HTTP spec is not a W3 document, and the version you're looking at is outdated.
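If you go the PATCH route, one concrete option is JSON Patch (RFC 6902, content type application/json-patch+json), which makes the removal explicit instead of relying on a dirtyRemove flag. The paths below are illustrative for the example payload:

```json
[
  { "op": "replace", "path": "/name", "value": "Spanish-01" },
  { "op": "replace", "path": "/code", "value": "ESP01" },
  { "op": "remove",  "path": "/students/1" }
]
```

With a labeled content type like this, the server knows exactly what semantics to apply, which is the point made above about PATCH.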
I need help in a scenario when we do multipath updates to a fan-out data. When we calculate the number of paths and then update, in between that, if a new path is added somewhere, the data would be inconsistent in the newly added path.
For example, below is the data of blog posts. The posts can be tagged with multiple terms like "tag1", "tag2". In order to find how many posts are tagged with a specific tag, I can fan out the posts data to the tags path as well:
/posts/postid1: {"Title": "Title 1", "body": "About Firebase", "tags": {"tag1": true, "tag2": true}}
/tags/tag1/postid1: {"Title": "Title 1", "body": "About Firebase"}
/tags/tag2/postid1: {"Title": "Title 1", "body": "About Firebase"}
Now consider concurrently,
1a) that User1 wants to modify title of postid1 and he builds following multi-path update:
/posts/postid1/Title: "Title 1 modified"
/tags/tag1/postid1/Title: "Title 1 modified"
/tags/tag2/postid1/Title: "Title 1 modified"
1b) At the same time User2 wants to add tag3 to the postid1 and build following multi-path update:
/posts/postid1/tags: {"tag1": true, "tag2": true, "tag3": true}
/tags/tag3/postid1: {"Title": "Title 1", "body": "About Firebase"}
So apparently both updates can succeed one after the other, and we can end up with /tags/tag3/postid1 out of sync, as it has the old title.
I can think of security rules to handle this, but I'm not sure whether this is correct or will work.
For example, we can have updatedAt and lastUpdated fields and check that we are updating the same version of the post that we read:
"posts": {
"$postid":{
".write":true,
".read":true,
".validate": "
newData.hasChildren(['userId', 'updatedAt', 'lastUpdated', 'Title']) && (
!data.exists() ||
data.child('updatedAt').val() === newData.child('lastUpdated').val())"
}
}
Also, for tags we do not want to repeat that check; instead we can check whether /tags/$tag/$postid/updatedAt is the same as /posts/$postid/updatedAt.
"tags":{
"$tag":{
"$postid":{
".write":true,
".read":true,
".validate": "
newData.hasChildren(['userId', 'updatedAt', 'lastUpdated', 'Title']) && (
newData.child('updatedAt').val() === root.child('posts').child($postid).child('updatedAt').val())"
}
}
}
With this, "/posts/$postid" has concurrency control built in, and users can only write on top of the version they read.
"/posts/$postid" also becomes the source of truth, and all other fan-out paths check whether their updatedAt field matches that primary source-of-truth path.
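A client-side sketch of how such a multi-path update object carrying the concurrency fields could be assembled (the function and variable names are illustrative, not a Firebase API; `prevUpdatedAt` is the version read before editing):

```javascript
// Build the fan-out update for a title change, stamping every path with
// the new version (updatedAt) and recording the version we read (lastUpdated)
// so the security rules above can reject stale writes.
function buildTitleUpdate(postId, tags, newTitle, prevUpdatedAt, now) {
  const update = {};
  update[`/posts/${postId}/Title`] = newTitle;
  update[`/posts/${postId}/lastUpdated`] = prevUpdatedAt; // version we read
  update[`/posts/${postId}/updatedAt`] = now;             // our new version
  for (const tag of Object.keys(tags)) {
    update[`/tags/${tag}/${postId}/Title`] = newTitle;
    update[`/tags/${tag}/${postId}/updatedAt`] = now;
  }
  return update;
}
```

The resulting object would then be passed to a single `update()` call so that all paths commit or fail together.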
Will this bring in consistency, or are there still problems? Could it hurt performance when done at scale?
Are multi-path updates and rules atomic together? By that I mean: are the rules evaluated separately, in isolation, for multi-path updates like 1a and 1b above?
Unfortunately, Firebase does not provide any guarantees, or mechanisms, to provide the level of determinism you're looking for. I have had the best luck front-ending such updates with an API stack (GCF and Lambda are both very easy, server-less methods of doing this). The updates can be made in that layer, and even serialized if absolutely necessary. But there isn't a safe way to do this in Firebase itself.
There are numerous "hack" options you could apply. You could, for example, have a simple lock mechanism using a dedicated collection for tracking write locks. Clients could post to a lock collection, then verify that their key was the only member of that collection, before performing a write. But I hope you'll agree with me that such cooperative systems have too many potential edge cases, potential security issues, and so on. In Firebase, it is best to design such that this component is not a requirement in the first place.
What javascript API calls are needed to set the grade after completing an activity? Now I have these three calls:
LMSSetValue("cmi.core.score.min", 0);
LMSSetValue("cmi.core.score.max", 100);
LMSSetValue("cmi.core.score.raw", score);
I also set the status to completed:
LMSSetValue("cmi.core.lesson_status", "completed");
When I complete the activity as a student, sometimes I can see an icon which tells that the activity is completed ("1 attempt(s)"), sometimes not. The gained score is never there.
Desire2Learn is at version 10.1
Not a SCORM expert by any means, but someone here who knows more about it than me makes these points:
You also need to call Commit and Terminate and/or LMSFinish; you can find some good technical resources to help developers at the SCORM website, in case you don't already know about them.
To verify scores and status getting to the Learning Environment, you can check the SCORM reports in the Web UI (Content > Table of Contents > View Report), which is the standard place to view SCORM results.
If scores are set there, you can get them into the grade book in two ways:
You can preview the content topic as an instructor: below the topic view, you'll find a spot to associate a grade item with the topic.
If the DOME configuration variable d2l.Tools.Content.AllowAutoSCORMGradeItem is on for the course, that should automatically create a grade item for that SCORM content object.
As Viktor says, you must invoke LMSCommit after using LMSSetValue, or else the data will not be persisted ('saved') in the LMS.
LMSSetValue("cmi.core.score.min", 0);
LMSSetValue("cmi.core.score.max", 100);
LMSSetValue("cmi.core.score.raw", score);
LMSSetValue("cmi.core.lesson_status", "completed");
LMSCommit(); //save in database
LMSFinish(); //exit course
Note that "LMSSetValue" is not an official SCORM call, it means you're working with a SCORM wrapper of some kind. Therefore where I say LMSCommit and LMSFinish, you might actually need to use different syntax -- I'm just guessing about the function names. Check your SCORM wrapper's documentation. The point is that you need to commit (save) and terminate (finish).
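Putting the whole session together, a sketch might look like the following. `API` stands for whatever adapter object your wrapper exposes; the function names follow the SCORM 1.2 Run-Time Environment conventions, so adjust them to your wrapper:

```javascript
// Full SCORM 1.2 call sequence: initialize, set score and status,
// commit (persist), then finish (terminate the session).
function reportScore(API, score) {
  API.LMSInitialize("");
  API.LMSSetValue("cmi.core.score.min", "0");
  API.LMSSetValue("cmi.core.score.max", "100");
  API.LMSSetValue("cmi.core.score.raw", String(score));
  API.LMSSetValue("cmi.core.lesson_status", "completed");
  API.LMSCommit("");  // persist the data model to the LMS
  API.LMSFinish("");  // end the session
}
```

Note that SCORM 1.2 expects data-model values as strings, hence `String(score)`.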
I am using gsm cell data to get the current device position. To do this I use the Google Maps Geolocation API. All fields seem to be optional in the first part of the needed JSON parameters (URL: https://www.googleapis.com/geolocation/v1/geolocate?key=API_key):
{
"homeMobileCountryCode": 310,
"homeMobileNetworkCode": 410,
"radioType": "gsm",
"carrier": "Vodafone",
"cellTowers": [
// See the Cell Tower Objects section below.
],
"wifiAccessPoints": [
// See the WiFi Access Point Objects section below.
]
}
Do the first four parameters (homeMobileCountryCode, homeMobileNetworkCode, radioType and carrier) have any influence on the accuracy or the response time? I could not make out any differences.
I can believe that the Google database of cell IDs is organised by carrier and network type, so there might in theory be a quicker response if you supply these parameters, but it would surely be negligible. Can't think of a technical reason why it would need to know your home operator details too. The only information these parameters would give them is (1) whether you're roaming or not and (2) any stored information that they might be holding about individual operators. Does Google have special agreements with any operators?
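One way to check empirically is to send the same cell data with and without the optional top-level fields and compare responses. A minimal cell-only request body could be built like this (the tower values are made up; the field names follow the Geolocation API documentation, and `considerIp` is set to false so the result reflects only the cell data):

```javascript
// Build a minimal Geolocation API request body using only cell-tower data.
function buildRequest(towers) {
  return {
    considerIp: false, // don't fall back to IP-based location
    cellTowers: towers
  };
}

const body = buildRequest([{
  cellId: 42,
  locationAreaCode: 415,
  mobileCountryCode: 310,
  mobileNetworkCode: 410,
  signalStrength: -60
}]);
// POST JSON.stringify(body) to
// https://www.googleapis.com/geolocation/v1/geolocate?key=API_key
```

Timing the two variants of the request over several runs would show whether the optional parameters make any measurable difference.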