I have a field in my acore_characters table named 'rank', a tinyint ranging from 0 to 3 inclusive based on the player's progression. I need to read that value at login and under certain other specific circumstances.
I wrote the following PreparedStatement: "SELECT rank FROM acore_characters WHERE guid = ?" and then this code, which is supposed to read that value:
uint16 GetCharactersRank(uint64 guid) {
    PreparedStatement* stmt = CharacterDatabase.GetPreparedStatement(mystatement);
    stmt->setUInt32(0, GetGUID());
    PreparedQueryResult result = CharacterDatabase.Query(stmt);
    if (result) {
[...truncated]
and then fetching the result and so on, but when compiling I get error C3861 ('GetGUID': identifier not found) in Player.cpp. What is going wrong? The other GetGUID calls throughout the file don't give this error. I'm not very proficient in C++, so any help is very appreciated.
It's not recommended to patch the core directly to add customisations to it. Instead, use modules.
An example can be found here: Is it possible to turn a core patch into a module for AzerothCore?
You can have a look, copy the skeleton module, and start modifying it to create your own.
In your case, you probably want to use the OnLogin player hook, as sketched below.
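For illustration, a minimal sketch of such a module script. It assumes the current AzerothCore PlayerScript::OnLogin signature and the fmt-style {} placeholder syntax of CharacterDatabase.Query, so verify both against your core revision; all the names are made up. It also shows the fix for your compile error: GetGUID() is a member of Player, so a free function like GetCharactersRank has no object to call it on (that's why the other calls in Player.cpp work). Use the guid parameter you already pass in, or a Player* you have at hand.

// Hypothetical module script; class and loader names are illustrative.
#include "ScriptMgr.h"
#include "Player.h"
#include "DatabaseEnv.h"

class rank_on_login : public PlayerScript
{
public:
    rank_on_login() : PlayerScript("rank_on_login") { }

    void OnLogin(Player* player) override
    {
        // `rank` is a reserved word in MySQL 8+, hence the backticks.
        QueryResult result = CharacterDatabase.Query(
            "SELECT `rank` FROM acore_characters WHERE guid = {}",
            player->GetGUID().GetCounter());

        if (!result)
            return;

        Field* fields = result->Fetch();
        uint8 rank = fields[0].Get<uint8>();
        // ... act on the rank (0-3) here
    }
};

void Addrank_on_loginScripts()
{
    new rank_on_login();
}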
I have an enum with around 150 literals which I need to get into IBM Rhapsody.
Doing this by hand is clearly lengthy and error prone. I have googled extensively, but found only things that tell me how to edit the generated code, not how to go the other way.
The question is: how is this done? And if there is no way, please someone post that as an answer.
David,
I would jump into the Java API (plugin subsystem) and do it that way. If you haven't learned how to use the API, there is a bit of a learning curve. There are two ways to go about it. You can implement a Java app (or your favorite JVM language; I use Scala) that realizes the Rhapsody plugin framework, then package it up and deploy it so that it gets loaded when you load your model. Or, if it is a one-off job, do everything up to the point of packaging it up, run it from within your IDE, and you are done. If you are comfortable with Scala, I can post some code; a rough sketch of the one-off variant is below.
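For illustration, a rough Java sketch of that one-off approach. Every Rhapsody call here (RhapsodyAppServer.getActiveRhapsodyApplication, findNestedElementRecursive, addType, setKind, addEnumerationLiteral) is an assumption to verify against the Rhapsody API Reference for your version, and the package and type names are made up. It assumes Rhapsody's Java API jar is on the classpath and a session with your model is open.

// Hypothetical one-off importer run from the IDE against a live Rhapsody session.
import com.telelogic.rhapsody.core.*;

public class EnumImporter {
    public static void main(String[] args) {
        // Attach to the running Rhapsody instance (assumed entry point).
        IRPApplication app = RhapsodyAppServer.getActiveRhapsodyApplication();
        IRPProject project = app.activeProject();

        // Find the package that should hold the enum (assumed lookup call).
        IRPPackage pkg = (IRPPackage) project.findNestedElementRecursive(
                "MyPackage", "Package");

        // Create the type and mark it as an enumeration (assumed calls).
        IRPType myEnum = pkg.addType("MyEnum");
        myEnum.setKind("Enumeration");

        // In practice, read the ~150 name/value pairs from a file instead.
        String[][] literals = { { "enum_name", "0x4e" }, { "enum_name2", "0xF2" } };
        for (String[] lit : literals) {
            IRPEnumerationLiteral l = myEnum.addEnumerationLiteral(lit[0]);
            l.setValue(lit[1]);
        }

        project.save();
    }
}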
So what I did in the end: I edited the relevant .sbs file, used a small Python program to generate the items I required, and then updated the length of the array accordingly.
import uuid

all_the_literals = ["enum_name = 0x4e", "enum_name2 = 0xF2", ... ,]

# Each entry splits into (name, "=", value).
for field1, waste, field1_value in map(lambda x: x.split(" "),
                                       all_the_literals):
    literal_string = f""" {{ IEnumerationLiteral
    - _id = GUID {uuid.uuid4()};
    - _name = \"{field1}\";
    - codeUpdateCGTime = 5.16.2022::19:24:18;
    - _modifiedTimeWeak = 5.16.2022::19:24:18;
    - _value = \"{field1_value}\";
    }}"""
    print(literal_string)
Note the above "code" snippet purely prints the items, which you then copy-paste into the relevant field in the sbs file. YMMV -- this was the correct format for an enum in Rhapsody (and note how I fudged the update time, but it worked successfully, so you'll need to do the same if you use this answer).
Also note it's probably better to use bauhaus9's answer, but I definitely didn't have time for it.
Two questions:
How do I check whether a file exists before EXTRACT?
We have a scenario where a new input file is generated every day for catalog data. We need to merge the new input with the d-1 file. Before the merge, we want to make sure that the new input file exists at the source location.
Does U-SQL support a try...catch block?
Regarding checking if a file exists: we recently released a compile-time IF statement that can indeed check for partition existence (other objects such as files and tables are on the roadmap).
Once that feature is released (still one or two refreshes out at the time of this answer) it may look something like this (syntax subject to change):
IF FILE.EXISTS("/mydir/myfile.csv") THEN
    @data = EXTRACT ... FROM "/mydir/myfile.csv" USING ...;
    ...
    @jobstate = SELECT * FROM (VALUES("job completed")) AS T(status);
ELSE
    @jobstate = SELECT * FROM (VALUES("file not ready. Job not executed.")) AS T(status);
END;
OUTPUT @jobstate TO "/jobs/myjobstate.csv" USING Outputters.Csv();
You will be able to provide the name as a parameter as well. Please let me know if that will work for your scenario.
Another alternative is to use the file set syntax, especially if you want to use a dynamic value to determine the process. A missing file then simply yields an empty rowset:
@data = EXTRACT ..., date DateTime
        FROM "/mydir/{date:yyyy}/{date:MM}/{date:dd}/data.csv"
        USING ...;
@data = SELECT * FROM @data WHERE date == DateTime.Now.AddDays(-1);
... // continue processing @data, which is empty if yesterday's file is not yet there
Having said that, you may want to check whether your job orchestration framework (such as ADF) is a better place to check for existence before submitting the job in the first place.
As to the try...catch block: U-SQL itself is a declarative language that is optimized at the level of the whole script, with the plan generated and optimized at runtime. Thus a dynamic TRY-CATCH is currently not available, since it would severely impact the ability to optimize the script (e.g., you cannot move predicates or column pruning outside of a try-catch block). Also, TRY/CATCH can lead to some very hard to understand and debug code, especially if it is used to mimic procedural workflows in an otherwise declarative environment.
However, you can use try/catch inside your C# functions without problems if you need to catch C# runtime errors.
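For example, a minimal sketch of that pattern, assuming a C# code-behind file next to the U-SQL script (the namespace and helper names are made up):

// Code-behind helper; U-SQL can call this per row, and any C# runtime
// error is handled inside the function instead of failing the job.
using System;

namespace MyHelpers
{
    public static class SafeParse
    {
        // Returns null instead of throwing on a malformed value.
        public static int? ToIntOrNull(string s)
        {
            try
            {
                return int.Parse(s);
            }
            catch (Exception)
            {
                return null;
            }
        }
    }
}

In the script it can then be called like any other C# expression, e.g. SELECT MyHelpers.SafeParse.ToIntOrNull(rawValue) AS parsed FROM @data;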
FILE.EXISTS() always returns True when executed locally. However, it works when executing against Azure Data Lake.
I tried the MSDN example, and the following returns True, True:
DECLARE @filepath_good = "/Samples/Data/SearchLog.tsv";
DECLARE @filepath_bad = "/Samples/Data/zzz.tsv";

@result =
    SELECT FILE.EXISTS(@filepath_good) AS exists_good,
           FILE.EXISTS(@filepath_bad) AS exists_bad
    FROM (VALUES (1)) AS T(dummy);

OUTPUT @result
TO "/Output/FileExists.txt"
USING Outputters.Csv();
I have Microsoft Azure Data Lake Tools for Visual Studio version 2.2.5000.0
I am trying to add a new premake5 field in my premake5 script, but am having trouble understanding how to specify the field's kind.
I want to add a field that contains a list of (path, string) pairs, but can't work out how to specify the kind spec.
The documentation and examples are not particularly clear.
This is how I have registered my new field:
premake.api.register({
    name = "mypathmappings",
    scope = "config",
    kind = "list:path:string", -- or "list:keyed:path:string"
})
and inside a config scope I declare the field item like so:
project "myproject"
    mypathmappings { ["path/to/file1"] = "stringvalue1", ["path/to/file2"] = "stringvalue2" }
However, when it comes to processing time, I don't get what I'm expecting in the field:
function processpathmappings(cfg)
    local pathmappings = cfg.mypathmappings
    for path, value in pairs(pathmappings) do
        -- do something with the path and value, but
        -- value is not a string as expected
    end
end
Can someone explain how the complex kinds can be built up correctly from the field kinds registered in api.lua?
I get that "list:integer" specifies a list of integers, but I don't know how the "keyed" element works, for example.
Right now, it is not possible to control the "kind" of the keys in a keyed value. The best you will be able to get with the current system is kind="keyed:string", which should give you the values (the strings) that you want, but the paths will not be processed by Premake (made absolute, etc.)
If it is feasible, you might want to flip it around to kind="keyed:path" and set the values like this:
mypathmappings { ["stringvalue1"] = "path/to/file1" }
But that relies on your string values being unique within a map.
In theory, Premake's field API could be extended to support kinds of keys; feel free to open a ticket or submit a pull request.
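For illustration, a minimal sketch of that flipped approach, reusing the names from the question (the processing function is just a stand-in, and the exact processing Premake applies to path values may vary):

premake.api.register({
    name = "mypathmappings",
    scope = "config",
    kind = "keyed:path", -- keys stay plain strings; values are treated as paths
})

project "myproject"
    mypathmappings { ["stringvalue1"] = "path/to/file1", ["stringvalue2"] = "path/to/file2" }

-- At processing time, each value should arrive as a Premake-processed path
-- (e.g. made absolute) keyed by the original string.
function processpathmappings(cfg)
    for str, mappedpath in pairs(cfg.mypathmappings or {}) do
        print(str, mappedpath)
    end
end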
This code is from a program I use to enter and track information. Numbers are entered as work orders (WO) to track clients, but one of the tables is duplicating the WO information, so I was trying to figure out a general outline of what this code is saying so that the problem can be fixed.
Here is the original line:
wc.dll?x3~emproc~datarecord~&ACTION=DISPLAY&TABLE+WORK&KEYVALUE=<%work.wo%&KEYFIELD=WO
What I think I understand of it so far, and I could be very wrong, is:
wc.dll?x3~emproc~datarecord~&ACTION
//No clue, because I don't know what ~ means or how & is used (connects to ACTION?)
=DISPLAY&TABLE+WORK&KEYVALUE
//Display of contents (what makes it pretty) and the value inside the table at that slot
=<%work.wo%&KEYFIELD
//Calling the work from the WO object
=WO
//Assigning whatever was placed into the WO field into the left side of the statement
I'll do my best to interpret the statement, with the limited information you've provided:
wc.dll is an instruction to invoke a DLL
? introduces a list of parameters (like a query string in HTTP)
x3~emproc~datarecord~ seems like a reference to a function in the DLL
& is a parameter separator
ACTION=DISPLAY sets the ACTION parameter to the value DISPLAY
TABLE+WORK perhaps sets a couple of flags
KEYVALUE=<%work.wo% sets the KEYVALUE parameter to the value of <%work.wo%
KEYFIELD=WO sets the KEYFIELD parameter to the value WO
Hope that helps.
I am trying to extract requirements from the QC Requirements module. I could extract all requirements of a QC project, but I would like to extract selected requirements only. So I need to give a folder path and extract requirements accordingly.
Currently I use ReqFactory to extract requirements from QC. Could you please help me or give me an idea how to extract requirements from a selected folder path?
I tried Req Path and father id, but it still does not fulfill my need, as some requirements may have multiple subfolders under parent folders.
I assume you would like to get all the child requirements of a requirement using the OTA API? The only solution I can offer is a bit clumsy. First you have to get the requirement where you want to start, e.g. "Requirements\Projects\ProjectX". How to achieve that is described in the OTA API Reference as an example for the ReqFactory object ("Find a specified requirement in a specified folder"), or it is posted in this forum. If you know the ID of the start requirement, you can simply get the requirement with req_factory.Item(id).
When you have the requirement where you want to start, you can use the Find method of the ReqFactory to get all its children, i.e. all Requirement objects whose path starts with the same path as the start requirement. Here is an example method in Ruby:
def list_all_child_requirements(start_req)
  req_factory = @tdc.ReqFactory
  req_path_strange_format = start_req.Field("RQ_REQ_PATH")
  child_req_list = req_factory.Find(start_req.ID, "RQ_REQ_PATH", req_path_strange_format, 8)
  child_req_list.each do |list_req|
    puts list_req
  end
end
The req_path_strange_format contains a string in the strange Quality Center notation, like "AAAAAB". The Find method starts from the start requirement and searches all requirements whose path starts with the same path as that of the start requirement. The parameter 8 means "starts with pattern" (described in the API Reference, enum tagTDAPI_REQMODE). I just don't know how to access the enum from Ruby; that's why the magic 8 is used. The Find method returns a list with the format "ID,NAME". From there it should be no problem to extract the requirements.
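If you want to avoid the magic number in Ruby, one option is simply to define the constant yourself, using the value 8 from the tagTDAPI_REQMODE documentation mentioned above:

# Named stand-in for the OTA enum value: 8 = "starts with pattern".
TDREQMODE_FIND_START_WITH = 8

child_req_list = req_factory.Find(start_req.ID, "RQ_REQ_PATH",
                                  req_path_strange_format,
                                  TDREQMODE_FIND_START_WITH)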
Doing the same directly in QC with a VAPI-XP-TEST and VB looks like this:
TDOutput.Clear
Dim reqPathStrangeFormat
Set reqF = tdConnection.ReqFactory
Set startReq = reqF.Item(14) ' ID of parent requirement
reqPathStrangeFormat = startReq.Field("RQ_REQ_PATH")
TDOutput.Print reqPathStrangeFormat
Set childReqList = reqF.Find(startReq.ID, "RQ_REQ_PATH", reqPathStrangeFormat, TDREQMODE_FIND_START_WITH)
For Each childReq in childReqList
    TDOutput.Print childReq
Next
This code first prints some strange string, "AAAAAB" or something similar, then a list with "ID,NAME" of the requirements.