I seem to be having an issue with an npc_text. The NPC is speaking orcish (or something) even though lang0 is set to 0.
Here's the SQL:
INSERT INTO npc_text (ID,text0_0,text0_1,lang0,Probability0,VerifiedBuild) VALUES
(65000,'Greetings $N, ready for some training?','Greetings $N, ready for some training?',0,0,12340),
(65001,'I cannot train you, $c. You need to talk to your class trainer.','I cannot train you, $c. You need to talk to your class trainer.',0,0,12340);
npc_text is cached in the client. I deleted my client cache and the correct text showed up.
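For anyone hitting the same thing: deleting the client's WDB cache forces it to re-request npc_text from the server. A sketch for a 3.3.5 client (the install path is an assumption, adjust it to your setup):

```shell
# Assumed install location; adjust to your client path
WOW_DIR="${WOW_DIR:-$HOME/World of Warcraft}"

# The WDB cache holds server-queried data such as npc_text;
# deleting it is safe, the client rebuilds it on demand
rm -rf "$WOW_DIR/Cache/WDB"
echo "client cache cleared"
```

After the next login the client re-queries the text and picks up your changes.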
Whenever I call a function that has (enforce-guard some-guard) from X-wallet or Zelcore it always fails with the error Keyset failure (keys-all)
I have no issues when doing this from Chainweaver
How to fix this
This is an issue if you are also providing capabilities with your request.
To fix this, you will need to put enforce-guard within a capability too.
So you will need to do something like
(defcap VERIFY_GUARD (some-guard:guard)
(enforce-guard some-guard)
)
And wherever you would call enforce-guard, you will then need to do
(with-capability (VERIFY_GUARD some-guard)
; Guarded code here
)
Why does this happen?
Chainweaver allows you to select unrestricted signing keys, which provides a key/guard for enforce-guard to work with.
However, X-Wallet and Zelcore don't provide this if capabilities are present on the request (they do provide it when no capabilities are present).
It is probably better practice to put enforce-guard inside capabilities anyway, and to use require-capability in places where you expect the guard to have passed.
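Putting the pieces together, a minimal module sketch might look like this (the module name, governance capability, and keyset name are placeholders, not anything from your code):

```
(module guard-example GOVERNANCE
  (defcap GOVERNANCE ()
    (enforce-guard (keyset-ref-guard "admin-keyset")))

  (defcap VERIFY_GUARD (some-guard:guard)
    (enforce-guard some-guard))

  (defun guarded-action:string (some-guard:guard)
    (with-capability (VERIFY_GUARD some-guard)
      ; Guarded code here
      "ok"))
)
```

Callers then sign for VERIFY_GUARD like any other capability, which keeps X-Wallet and Zelcore happy.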
I want to customize the standard drill-down functionality and add a text parameter to the drill-down URL. I will then parse and use the parameter in the SysStartUpCmdDrillDown or EventDrillDownPoller class like the solution provided by Jan B. Kjeldsen in this question.
The standard drill-down link has the form dynamics://Target/?DrillDown_RecID/, for example:
dynamics://0/?DrillDown_5637230378/
In previous versions of AX it was possible to modify the RecId to custom text and parse the text once the client is started:
dynamics://0/?DrillDown_0MenuItemName=PurchTable&FieldName=PurchId&FieldValue=P000044
Unfortunately, in AX 2012 the RecId is checked before the client is started, and if it is not a valid int64, the drill-down event is not sent to the client. Since it is not possible to change the RecId to anything other than an integer, @Alex Kwitny suggested in the comments on that same question that you can add the custom text to the drill-down target like this:
dynamics://0MenuItemName=PurchTable/?DrillDown_5637230378/
The problem I experience with this is that the link now gets confused about which instance to start.
If the target is equal to the value in System Admin -> System parameters -> Alerts -> Drill-down target, a client with the correct server instance is started. When I append my custom text, it always starts the default instance (which could be different from the instance I intended to start). While this is not ideal, I could work around this issue.
The bigger problem is that it now always starts a new session of the default instance, even if a client session is already started. As far as I can see I cannot write X++ code to solve this issue since the server instance is determined before any code in the client is executed.
My question is this - How can I add custom text to the drill-down link while preserving the way the client instance is started: If a client for the instance is already open, it should process the link in the open client, and not start up a new client of the default instance.
You should probably come up with another solution as mentioned in this post, but there could still be a way.
The URL has two objects that can be modified:
dynamics://[Drill-down target(str)]/?Drilldown_[Int64]
According to you, if you modify the [Drill-down target], then it launches AX using the default client config, and that is behavior that you don't want. If you have a matching [Drill-down target], it'll launch in the open client window, which is behavior I can't confirm, but I'll take it at face value and assume you're correct.
So that means the only thing you can modify in the URL is [int64]. This is actually a string that is converted to an int64 via str2int64(...), which in turn corresponds to a RecId. This is where it gets interesting.
This work all happens in \Classes\SysStartUpCmdDrillDown\infoRun.
Well, lucky for you the ranges for the objects are:
RecId - 0 to 9223372036854775807
Int64 - -9223372036854775808 to 9223372036854775807
You can call minRecId() and maxRecId() to confirm this.
So this means you have the numbers -9223372036854775808 to -1 to work with by calling URLs in this range:
dynamics://0/?DrillDown_-1
to
dynamics://0/?DrillDown_-9223372036854775808
Then you would modify \Classes\SysStartUpCmdDrillDown\infoRun to look for negative numbers, and fork to your custom code.
HOW you decide to use these negative numbers is up to you. You can have the first n digits be a table ID or a look-up value for a custom table. You can't technically use a RecId as part of that negative number, because in theory a RecId could get up that high (minus 1).
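A sketch of that fork in \Classes\SysStartUpCmdDrillDown\infoRun (the surrounding parse variable and the MyDrillDownParms table are hypothetical; only str2int64 and the negative-range idea come from the above):

```
// Inside infoRun, after the 'DrillDown_' prefix has been stripped
int64 recId = str2int64(recIdStr);

if (recId < 0)
{
    // Negative values can never be real RecIds, so this range is
    // free for custom data, e.g. a key into a custom parameter table.
    MyDrillDownParms parms = MyDrillDownParms::find(-recId); // hypothetical table
    new MenuFunction(parms.MenuItemName, MenuItemType::Display).run();
}
else
{
    // Standard drill-down handling for a real RecId continues here
}
```

Because the URL still carries a plain integer and an unmodified [Drill-down target], the existing client routing is untouched.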
Is there any possible way to check which query is so CPU intensive in _sqlsrv2 process?
Something which give me information about executed query in that process in that moment.
Is there any way to terminate that query without killing _sqlsrv2 process?
I cannot find any official materials in that subject.
Thank you for any help.
You could look into client database-request caching.
The code examples below assume you have ABL access to the environment. If not, you will have to use SQL instead, but it shouldn't be too hard to "translate" the code below.
I haven't used this a lot myself but I wouldn't be surprised if it has some impact on performance.
You need to start caching in the active connection. This can be done in the connection itself, or remotely via the VST tables (as long as your remote session is connected to the same database), so you need to be able to identify your connections. This can be done via the process ID.
Generally how to enable the caching:
/* "_MyConnection" is your own current connection. Normally you wouldn't cache your own connection; this just shows the mechanism */
FIND _myconnection NO-LOCK.
FIND _connect WHERE _connect-usr = _myconnection._MyConn-userid.
/* Start caching */
_connect._Connect-CachingType = 3.
DISPLAY _connect WITH FRAME x1 SIDE-LABELS WIDTH 100 1 COLUMN.
/* End caching */
_connect._Connect-CachingType = 0.
You need to identify your process first, via top or another program.
Then you can do something like:
/* Assuming pid 21966 */
FIND FIRST _connect NO-LOCK WHERE _Connect._Connect-Pid = 21966 NO-ERROR.
IF AVAILABLE _Connect THEN
DISPLAY _connect.
You could also look at the _Connect-Type. It should be 'SQLC' for SQL connections.
FOR EACH _Connect NO-LOCK WHERE _Connect._connect-type = "SQLC":
DISPLAY _connect._connect-type.
END.
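Putting the identification and the caching switch together, a sketch (the pid is an assumption carried over from the earlier example) could look like:

```
/* Assuming pid 21966, identified via top or similar */
FIND FIRST _Connect
    WHERE _Connect._Connect-Pid  = 21966
      AND _Connect._Connect-Type = "SQLC" NO-ERROR.

IF AVAILABLE _Connect THEN DO:
    /* Start caching the current database request */
    ASSIGN _Connect._Connect-CachingType = 3.

    /* ...let the suspect query run, then inspect what was cached... */
    DISPLAY _Connect._Connect-CacheInfo WITH WIDTH 100.

    /* Turn caching off again */
    ASSIGN _Connect._Connect-CachingType = 0.
END.
```

Remember to switch the caching type back to 0 when you are done, since leaving it on adds overhead to that connection.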
Best of all would be to do this in a separate environment. If you can't at least try it in a test environment first.
Here's a good guide.
You can use a Select like this:
select
c."_Connect-type",
c."_Connect-PID" as 'PID',
c."_connect-ipaddress" as 'IP',
c."_Connect-CacheInfo"
from
pub."_connect" c
where
c."_Connect-CacheInfo" is not null
But first you need to enable the connection cache; follow this example.
I am trying to mine on a private network.
How does one go about creating a genesis block for a private network in frontier ethereum?
I have seen: https://blog.ethereum.org/2015/07/27/final-steps/ but this is to get the public Genesis block.
{
"nonce": "0x0000000000000042",
"difficulty": "0x000000100",
"alloc": {
},
"mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
"timestamp": "0x00",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"gasLimit": "0x16388"
}
You can simply take the one generated here and modify the accounts and balances.
Also set the gas limit to a higher number like 0x2dc6c0 (3 million) and move the difficulty down to 0xb.
You can basically create any Genesis Block that you like, as long as it is valid according to the Yellowpaper, 4.3.4. Block Header Validity.
The Genesis Block does not indicate on which Blockchain a miner works. This is defined by connecting to the right peer-to-peer network or, if you are using the discovery mechanism on a network with multiple Blockchains running, using the network ID.
The (Genesis) Block describes the parameters of this specific Block and they are set according to the Miner's algorithm. Of course, any illegal behavior will be rejected by the consensus mechanism.
In conclusion, you can use the same GB for all custom Blockchains.
The values that have to be correct in terms of mathematical validation are nonce (Proof of Work), mixhash (Fowler–Noll–Vo reduced DAG value set), and timestamp (creation time). The geeky values in this example are a copy from the original Frontier release Genesis Block. The parentHash points to the parent block in the chain, and the Genesis Block is the only Block where 0 is allowed and required. alloc allows you to "pre-fill" accounts with Ether, but that's not needed here since we can mine Ether very quickly.
The difficulty defines the condition the Miner's (hash) algorithm must satisfy to find a valid block. On a test network, it's generally kept small in order to find a block on every iteration. This is useful for testing, since blocks are needed to execute transactions on the Blockchain. The block generation frequency is, in a way, the response time of the Blockchain.
The gasLimit is the upper limit of Gas that a transaction can burn. It's inherited into the next Block. extraData is 32 bytes of free text where you can et(h)ernalise smart things on the Blockchain :) The coinbase is the address that gets the mining and transaction execution rewards, in Ether, for this Block. It can be 0 here, since it will be set for each new block according to the coinbase of the Miner that found the Block (and added the transactions).
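The hex quantities above are easy to sanity-check. A short sketch (plain Python, using the values discussed in this answer) that decodes them:

```python
import json

# Genesis fragment using the values discussed above
genesis = json.loads("""
{
  "difficulty": "0x400",
  "gasLimit": "0x2dc6c0",
  "timestamp": "0x00",
  "coinbase": "0x0000000000000000000000000000000000000000"
}
""")

# Geth stores these fields as hex-encoded quantities
difficulty = int(genesis["difficulty"], 16)
gas_limit = int(genesis["gasLimit"], 16)

print(difficulty)  # 1024
print(gas_limit)   # 3000000
```

Decoding the fields like this is a quick way to confirm your custom genesis file says what you think it says before you init a node with it.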
I have documented this a bit more in detail here.
Hope this helps :)
{
"config": {
"chainId":2010,
"homesteadBlock":0,
"eip155Block":0,
"eip158Block":0
},
"gasLimit": "0x8000000",
"difficulty": "0x400",
"alloc": {}
}
Only the above attributes are accepted in Geth version 1.9 (go1.9).
Specifically, the genesis block building for private network is well explained in this short article.
One thing that I want to mention here is that the genesis block is the only block that has no reference to a previous block.
What javascript API calls are needed to set the grade after completing an activity? Now I have these three calls:
LMSSetValue("cmi.core.score.min", 0);
LMSSetValue("cmi.core.score.max", 100);
LMSSetValue("cmi.core.score.raw", score);
I also set the status to completed:
LMSSetValue("cmi.core.lesson_status", "completed");
When I complete the activity as a student, sometimes I can see the icon which tells that the activity is completed ("1 attempt(s)"), sometimes not. The gained score is never there.
Desire2Learn is at version 10.1
Not a SCORM expert by any means, but someone here that knows more about it than me makes these points:
You also need to call Commit and Terminate and/or LMSFinish; you can find some good technical resources to help developers at the SCORM website, in case you don't already know about them.
To verify scores and status getting to the Learning Environment, you can check the SCORM reports in the Web UI (Content > Table of Contents > View Report), which is the standard place to view SCORM results.
If scores are set there, you can get them into the grade book in two ways:
You can preview the content topic as an instructor: below the topic view, you'll find a spot to associate a grade item with the topic.
If the DOME configuration variable d2l.Tools.Content.AllowAutoSCORMGradeItem is on for the course, that should automatically create a grade item for that SCORM content object.
As Viktor says, you must invoke LMSCommit after using LMSSetValue, or else the data will not be persisted ('saved') in the LMS.
LMSSetValue("cmi.core.score.min", 0);
LMSSetValue("cmi.core.score.max", 100);
LMSSetValue("cmi.core.score.raw", score);
LMSSetValue("cmi.core.lesson_status", "completed");
LMSCommit(); //save in database
LMSFinish(); //exit course
Note that "LMSSetValue" is not an official SCORM call; it means you're working with a SCORM wrapper of some kind. Therefore, where I say LMSCommit and LMSFinish, you might actually need to use different syntax -- I'm just guessing about the function names. Check your SCORM wrapper's documentation. The point is that you need to commit (save) and terminate (finish).
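As a self-contained sketch of the full sequence: the mock `API` object below only stands in for whatever SCORM 1.2 API object your wrapper discovers in the LMS; the function names match SCORM 1.2, but verify them against your wrapper.

```javascript
// Minimal mock of a SCORM 1.2 API object, for illustration only;
// in a real course the LMS provides this and a wrapper locates it.
const API = {
  data: {},
  LMSSetValue(key, value) { this.data[key] = String(value); return "true"; },
  LMSCommit() { this.committed = true; return "true"; },  // persist the data
  LMSFinish() { this.finished = true; return "true"; },   // terminate session
};

function reportScore(score) {
  API.LMSSetValue("cmi.core.score.min", 0);
  API.LMSSetValue("cmi.core.score.max", 100);
  API.LMSSetValue("cmi.core.score.raw", score);
  API.LMSSetValue("cmi.core.lesson_status", "completed");
  API.LMSCommit();  // without this, scores may never reach the grade book
  API.LMSFinish();  // signals the attempt is over
}

reportScore(85);
console.log(API.data["cmi.core.score.raw"]); // "85"
```

The key point the sketch demonstrates: set the values first, commit once, then finish; committing after finish (or never) is the usual reason scores silently go missing.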