I want to retrieve the list of packages during a promote, e.g. when promoting from DEV to QA, and also get the list of files inside each package. What are the two commands for this?
Are you using the promote process from Workbench?
Consider using system variables in a post-linked process on the promote process:
[package] [version]
I am assuming you are executing the promote process on a package group.
This will provide the list of packages and the versions in those packages.
If you need more details, please reach out to us in the CA Harvest communities, where visibility is higher:
https://communities.ca.com/community/ca-harvest
Regards,
Balakrishna
When promoting from DEV to QA, use the post-link process,
for example:
scriptName "[project]" "[state]"
On the server, put the script (including the following SELECT):
SELECT DISTINCT c.PACKAGENAME, e.ITEMNAME, g.USERNAME, d.MAPPEDVERSION VERSION, f.PATHFULLNAME
FROM HARSTATE a, HARENVIRONMENT b, HARPACKAGE c, HARVERSIONS d, HARITEMS e, HARPATHFULLNAME f, HARUSER g
WHERE b.ENVOBJID = a.ENVOBJID
  AND a.STATEOBJID = c.STATEOBJID
  AND b.ENVIRONMENTNAME = '${Project}'
  AND a.STATENAME = '${state}'
  AND c.PACKAGEOBJID = d.PACKAGEOBJID
  AND d.ITEMOBJID = e.ITEMOBJID
  AND e.PARENTOBJID = f.ITEMOBJID
  AND e.ITEMTYPE <> 0
  AND g.USROBJID = c.CREATORID
  AND c.PACKAGENAME != 'BASE'
ORDER BY c.PACKAGENAME, f.PATHFULLNAME
I'm a novice and am working on learning the basics. I have no experience in coding.
I'm trying to make a world for myself and my friends that features quest rewards being the sole way to gain levels, and in full increments - exactly like milestone leveling in Dungeons and Dragons.
Is there a way to level up a character, or have an automated ".levelup" command be used on a character, triggering when that player completes a (custom) quest? Additionally, is this something that can be done in Keira3? Or will I need to use other tools?
I've tried granting quest reward consumables that use the spells 47292 and 24312 (https://wotlkdb.com/?spell=47292 and https://wotlkdb.com/?spell=24312) but those appear to just be the visual level-up effects.
There are multiple ways to achieve this. The most convenient way that I can think of is to compile the core with the Eluna module: https://github.com/azerothcore/mod-eluna
It allows for scripting with the easily accessible Lua language. For example, you can use the following code:
local questId = 12345
local questNpc = 23456
local maxLevel = 80
local CREATURE_EVENT_ON_QUEST_REWARD = 34 --(event, player, creature, quest, opt) - Can return true
local function MyQuestCompleted(event, player, creature, quest, opt)
    if player then -- check if the player exists
        if player:GetLevel() < maxLevel and quest:GetId() == questId then -- check that it is the right quest and the player isn't max level yet
            player:SetLevel( player:GetLevel() + 1 )
        end
    end
end
RegisterCreatureEvent( questNpc , CREATURE_EVENT_ON_QUEST_REWARD , MyQuestCompleted)
See https://www.azerothcore.org/pages/eluna/index.html for the documentation.
I have a few thousand video files in my Blob Storage, which I have set as a datastore.
This blob storage receives new files every night, and I need to split the data and register each split as a new version of an AzureML Dataset.
This is how I do the data split, simply getting the blob paths and splitting them.
container_client = ContainerClient.from_connection_string(AZ_CONN_STR,'keymoments-clips')
blobs = container_client.list_blobs('soccer')
blobs = map(lambda x: Path(x['name']), blobs)
train_set, test_set = get_train_test(blobs, 0.75, 3, class_subset={'goal', 'hitWoodwork', 'penalty', 'redCard', 'contentiousRefereeDecision'})
valid_set, test_set = split_data(test_set, 0.5, 3)
train_set, test_set, valid_set are just nx2 numpy arrays containing blob storage path and class.
Here is where I try to create a new version of my Dataset:
datastore = Datastore.get(workspace, 'clips_datastore')
dataset_train = Dataset.File.from_files([(datastore, b) for b, _ in train_set[:4]], validate=True, partition_format='**/{class_label}/*.mp4')
dataset_train.register(workspace, 'train_video_clips', create_new_version=True)
How is it possible that the Dataset creation seems to hang for an indefinite time even with only 4 paths?
I saw in the docs that providing a list of Tuple[datastore, path] is perfectly fine. Do you know why this happens?
Thanks
Do you have your Azure Machine Learning Workspace and your Azure Storage Account in different Azure Regions? If that's true, latency may be a contributing factor with validate=True.
Another possibility may be slowness in the way datastore paths are resolved. This is an area where improvements are being worked on.
As an experiment, could you try creating the dataset using a url instead of datastore? Let us know if that makes a difference to performance, and whether it can unblock your current issue in the short term.
Something like this:
dataset_train = Dataset.File.from_files(path="https://bloburl/**/*.mp4?accesstoken", validate=True, partition_format='**/{class_label}/*.mp4')
dataset_train.register(workspace, 'train_video_clips', create_new_version=True)
I'd be interested to see what happens if you run the dataset creation code twice in the same notebook/script. Is it faster the second time? I ask because it might be an issue with the .NET Core runtime startup (which would only happen the first time you run the code).
EDIT 9/16/20
While it doesn't seem to make sense that .NET Core is invoked when no data is moving, I suspect it is the validate=True parameter that requires all the data to be inspected (which can be computationally expensive). I'd be interested to see what happens if that parameter is False.
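If validation turns out to be the culprit, a quick experiment would be to build the same dataset with validate=False and compare timings. A minimal sketch, reusing the workspace, datastore, and train_set from the question (azureml-core v1 SDK assumed):
from azureml.core import Dataset, Datastore, Workspace

workspace = Workspace.from_config()  # or however the workspace is loaded in your script
datastore = Datastore.get(workspace, 'clips_datastore')

# Same four (datastore, path) tuples as above, but skip the per-file validation
dataset_train = Dataset.File.from_files(
    [(datastore, b) for b, _ in train_set[:4]],
    validate=False,
    partition_format='**/{class_label}/*.mp4'
)
dataset_train.register(workspace, 'train_video_clips', create_new_version=True)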
For a project I am creating different layers which should all be written into one geopackage.
I am using QGIS 3.16.1 and the Python console inside QGIS which runs on Python 3.7
I have tried many things but cannot figure out how to do this. This is what I have used so far.
vl = QgsVectorLayer("Point", "points1", "memory")
vl2 = QgsVectorLayer("Point", "points2", "memory")
pr = vl.dataProvider()
pr.addAttributes([QgsField("DayID", QVariant.Int), QgsField("distance", QVariant.Double)])
vl.updateFields()
f = QgsFeature()
for x in range(len(tag_temp)):
    f.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(lon[x],lat[x])))
    f.setAttributes([dayID[x], distance[x]])
    pr.addFeature(f)
vl.updateExtents()
# I'll do the same for vl2 but with other data
uri ="D:/Documents/QGIS/test.gpkg"
options = QgsVectorFileWriter.SaveVectorOptions()
context = QgsProject.instance().transformContext()
QgsVectorFileWriter.writeAsVectorFormatV2(vl,uri,context,options)
QgsVectorFileWriter.writeAsVectorFormatV2(vl2,uri,context,options)
The problem is that in 'test.gpkg' a layer called 'test' is created, not 'points1' or 'points2'.
Also, the second QgsVectorFileWriter.writeAsVectorFormatV2() call overwrites the output of the first one instead of appending its layer to the existing geopackage.
I also tried creating single geopackages and then using the 'Package Layers' processing tool (processing.run("native:package")) to merge all layers into one geopackage, but then the attribute types are unfortunately all converted to strings.
Any help is much appreciated. Many thanks in advance.
You need to change the SaveVectorOptions, in particular the actionOnExistingFile mode, after creating the gpkg file:
options = QgsVectorFileWriter.SaveVectorOptions()
#options.driverName = "GPKG"
options.layerName = vl.name()
QgsVectorFileWriter.writeAsVectorFormatV2(vl, uri, context, options)
# switch mode to append a layer instead of overwriting the file
options.actionOnExistingFile = QgsVectorFileWriter.CreateOrOverwriteLayer
options.layerName = vl2.name()
QgsVectorFileWriter.writeAsVectorFormatV2(vl2, uri, context, options)
The documentation is here: SaveVectorOptions
I also tried creating single geopackages and then using the 'Package Layers' processing tool (processing.run("native:package")) to merge all layers into one geopackage, but then the attribute types are unfortunately all converted to strings.
This is definitely the recommended way; please consider reporting the bug.
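For reference, here is a self-contained sketch that combines the pieces above: it creates two memory point layers and writes both into a single GeoPackage under their own layer names. It assumes the QGIS 3.16 PyQGIS API from the question; the field names and coordinates are made up for illustration.
from qgis.core import (QgsFeature, QgsField, QgsGeometry, QgsPointXY,
                       QgsProject, QgsVectorFileWriter, QgsVectorLayer)
from qgis.PyQt.QtCore import QVariant

def make_point_layer(name, rows):
    # Build an in-memory point layer with a DayID/distance attribute table
    layer = QgsVectorLayer("Point?crs=EPSG:4326", name, "memory")
    provider = layer.dataProvider()
    provider.addAttributes([QgsField("DayID", QVariant.Int),
                            QgsField("distance", QVariant.Double)])
    layer.updateFields()
    for day_id, distance, lon, lat in rows:
        feature = QgsFeature(layer.fields())
        feature.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(lon, lat)))
        feature.setAttributes([day_id, distance])
        provider.addFeature(feature)
    layer.updateExtents()
    return layer

vl = make_point_layer("points1", [(1, 10.5, 4.89, 52.37)])   # made-up sample data
vl2 = make_point_layer("points2", [(2, 3.2, 4.90, 52.38)])

uri = "D:/Documents/QGIS/test.gpkg"
context = QgsProject.instance().transformContext()

options = QgsVectorFileWriter.SaveVectorOptions()
options.layerName = vl.name()                        # first write creates the .gpkg
QgsVectorFileWriter.writeAsVectorFormatV2(vl, uri, context, options)

options.actionOnExistingFile = QgsVectorFileWriter.CreateOrOverwriteLayer
options.layerName = vl2.name()                       # second write appends a new layer
QgsVectorFileWriter.writeAsVectorFormatV2(vl2, uri, context, options)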
Problem: I need to control the order in which tasks are processed in parallel by a foreach loop. Unfortunately, this is not supported by foreach.
Solution in mind: Use doRedis so that the database holds all the tasks executed in the foreach loop. To control the order, I want to override getTask via setGetTask so that tasks are fetched in a pre-specified order. However, I could not find much documentation on how to do this.
Additional Information:
There is a small paragraph on setGetTask with an example in the doRedis documentation.
getTask <- function ( queue , job_id , ...)
{
key <- sprintf("
redisEval("local x=redis.call('hkeys',KEYS[1])[1];
if x==nil then return nil end;
local ans=redis.call('hget',KEYS[1],x);
redis.call('hdel',KEYS[1],x);i
return ans",key)
}
setGetTask(getTask)
I do think, though, that the code in the documentation is syntactically incorrect (missing, imho, a " and a closing bracket ")"). I thought this was not possible on CRAN, as the code in the documentation is executed on submission.
Changing the getTask function does not change anything with regard to how the workers get tasks (even when introducing obvious nonsense into the redisEval call, like changing it to redisEval("dddddddddd(((")).
I only had access to the setGetTask function after installing the package from source (which I downloaded from the official CRAN package page, version 1.1.1, which imho should make no difference compared to installing it directly from CRAN).
Data: The data frame of tasks to execute looks like the following:
taskName;taskQueuePosition;parameter1;parameterN
taskT;1;val1;10
taskK;2;val2;8
taskP;3;val3;7
taskA;4;val4;7
I want to use 'taskQueuePosition' to control the order, tasks with lower numbers should be executed first.
Questions:
Does anybody know any sources where I can get more information on doing this with doRedis or on setGetTask?
Does anybody know how I need to change getTask to achieve what is described above?
Any other smart ideas to control the order of execution in a foreach loop? Preferably so that at some point I can use doRedis as the parallel back end (changing this would mean a major change in the processing due to complicated technical infrastructure reasons).
Code (for easy reproduction):
The following assumes that the redis-server is started on the local machine.
Redis DB Filling:
library(doRedis)
library(foreach)
options('redis:num'=TRUE) # needed for proper execution
REDIS_JOB_QUEUE = "jobs"
registerDoRedis(REDIS_JOB_QUEUE)
# filling up the data frame
taskDF = data.frame(taskName=c("taskT","taskK","taskP","taskA"),
taskQueuePosition=c(1,2,3,4),
parameter1=c("val1","val2","val3","val4"),
parameterN=c(10,8,7,7))
foreach(currTask = iter(taskDF, by = 'row'),
        .verbose = T
        ) %dopar% {
  print(paste("Executing task: ", currTask$taskName))
  Sys.sleep(currTask$parameterN)
}
removeQueue(REDIS_JOB_QUEUE)
Worker:
library(doRedis)
REDIS_JOB_QUEUE = "jobs"
startLocalWorkers(n=1, queue=REDIS_JOB_QUEUE)
I was able to solve the problem and can now control the order of task execution.
Additional information:
1) There seems to be a typo in the documentation that renders the getTask example non-functional. Judging by the form of the default_getTask function in the file task.R in the package, it should probably look something like this:
getTaskDefault <- function ( queue , job_id , ...)
{
key <- sprintf("%s:%s",queue, job_id)
return(redisEval("local x=redis.call('hkeys',KEYS[1])[1];
if x==nil then return nil end;
local ans=redis.call('hget',KEYS[1],x);
redis.call('set', KEYS[1] .. '.start.' .. x, x);
redis.call('hdel',KEYS[1],x);
return ans",key))
}
It seems that the characters after the first percent sign in the first line of the function got lost, which would explain the uneven number of brackets and quotes.
2) setGetTask still does not have any effect for me. However, when I set the getTask function through .options.redis while the DB is being filled (as described in the package vignette), it is successfully called.
3) The information in 2) means that I do not need the setGetTask function, so I can use the package from CRAN.
----- Questions -----
1) The doRedis vignette describes how a custom getTask can be successfully set.
2 and 3) When the Lua script in the getTask function is modified as shown below, the tasks are drawn from the database in the order in which they were submitted. This is not exactly what I was asking for, but due to time constraints, and the fact that I have (or rather had) no idea about Lua scripting, it is imho a satisfactory solution to control the order of submission via the taskQueuePosition column.
getTaskInOrder <- function ( queue , job_id , ...)
{
key <- sprintf("%s:%s",queue, job_id)
return(redisEval("
local tasks=redis.call('hkeys',KEYS[1]); -- get all tasks
local x=tasks[1]; -- get the first available task
if x==nil then -- if there are no tasks left, stop processing
return nil
end;
local xMin = 65535; -- if we have more tasks than 65535, getting the
-- task with the lowest taskID is not guaranteed to be the first one
local i = 1;
-- local iMinFound = -1;
while (x ~= nil) do -- search the array until there are no tasks left
-- print('x: ',x)
local xNum = tonumber(x);
if(xNum<xMin) then
xMin = xNum;
-- iMinFound = i;
end
i=i+1;
-- print('i is now: ',i);
x=tasks[i];
end
-- print('Minimum is task number',xMin,' found at i ', iMinFound)
x=tostring(xMin) -- convert it back to a string (maybe it would
-- be better to keep the original string somewhere,
-- in case we lose some information whilst converting to number)
-- print('x is now:',x);
-- print(KEYS[1] .. '.start.' .. x, x);
-- print('');
local ans=redis.call('hget',KEYS[1],x);
redis.call('set', KEYS[1] .. '.start.' .. x, x);
redis.call('hdel',KEYS[1],x);
return ans",key))
}
Important note: I noticed that if a task is aborted, the order is screwed up and the resubmitted task (even though the task number remains the same) will be executed after the originally submitted tasks. This is okay for me.
------ Code (for easy reproduction):------
This leads to the following code example (with 12 entries in the task data frame instead of the original 4):
Redis DB Filling:
library(doRedis)
library(foreach)
options('redis:num'=TRUE) # needed for proper execution
REDIS_JOB_QUEUE = "jobs"
getTaskInOrder <- function ( queue , job_id , ...)
{
...like above
}
registerDoRedis(REDIS_JOB_QUEUE)
# filling up the data frame already in order of tasks to be executed
# otherwise the dataframe has to be sorted by taskQueuePosition
taskDF = data.frame(taskName=c("taskA","taskB","taskC","taskD","taskE","taskF","taskG","taskH","taskI","taskJ","taskK","taskL"),
taskQueuePosition=c(1,2,3,4,5,6,7,8,9,10,11,12),
parameter1=c("val1","val2","val3","val4","val1","val2","val3","val4","val1","val2","val3","val4"),
parameterN=c(5,5,5,4,4,4,4,3,3,3,2,2))
foreach(currTask = iter(taskDF, by = 'row'),
        .verbose = T,
        .options.redis = list(getTask = getTaskInOrder)
        ) %dopar% {
  print(paste("Executing task: ", currTask$taskName))
  Sys.sleep(currTask$parameterN)
}
removeQueue(REDIS_JOB_QUEUE)
Worker:
library(doRedis)
REDIS_JOB_QUEUE = "jobs"
startLocalWorkers(n=1, queue=REDIS_JOB_QUEUE)
Another note: just in case you are processing long-running jobs, as I do, please be aware of a bug in doRedis 1.1.1 (the current version on CRAN) that leads to tasks being resubmitted (due to a timeout) even though the workers are still working on them.
I am trying to write a manual rate-limiting function for the rgithub package. So far this is what I have:
library(rgithub)
pull <- function(i){
  commits <- get.pull.request.commits(owner = owner, repo = repo, id = i, ctx = get.github.context(), per_page=100)
  links <- digest_header_links(commits)
  number_of_pages <- links[2,]$page
  if (number_of_pages != 0)
    try_default(for (n in 1:number_of_pages){
      if (as.integer(commits$headers$`x-ratelimit-remaining`) < 5)
        Sys.sleep(as.integer(commits$headers$`x-ratelimit-reset`)-as.POSIXct(Sys.time()) %>% as.integer())
      else
        get.pull.request.commits(owner = owner, repo = repo, id = i, ctx = get.github.context(), per_page=100, page = n)
    }, default = NULL)
  else
    return(commits)
}
list <- c(500, 501, 502)
pull_lists <- lapply(list, pull)
The intention is that if the x-ratelimit-remaining variable goes below a certain threshold, the script should wait until the time specified in x-ratelimit-reset has passed and then continue. However, I'm not sure whether this is the actual behavior of the if/else setup that I have here.
The function runs fine, but I have some doubts about whether it actually does the rate limiting or whether it somehow skips that step. Hence I ask: a) how can I find out if it actually does rate limiting, and b) if not, how can I rewrite it so that it does? Would a while condition/loop perhaps be better?
You can test whether it does the rate limiting by changing 5 to a large enough number and displaying the timing of Sys.sleep using:
print(system.time(Sys.sleep(...)))
That said, the function seems OK to me; unfortunately, I cannot test it easily, as rgithub is not available for my version of R (3.1.3).
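If it helps to see the intended wait-until-reset logic in isolation, here is a sketch of the same pattern outside R (shown in Python with the requests library; the URL and token are placeholders, and the header names are the standard GitHub x-ratelimit-* ones referenced in the question):
import time
import requests

def rate_limited_get(url, token, min_remaining=5):
    # Fetch a GitHub API URL; if the remaining quota is low, sleep until the reset time.
    response = requests.get(url, headers={"Authorization": "token " + token})
    remaining = int(response.headers.get("X-RateLimit-Remaining", "0"))
    reset_at = int(response.headers.get("X-RateLimit-Reset", "0"))
    if remaining < min_remaining:
        # X-RateLimit-Reset is a Unix timestamp in seconds; wait until it has passed
        time.sleep(max(reset_at - time.time(), 0))
    return response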
Not a canonical answer, but a working example.
You should add some logging to your script, even just something like write.csv(append=TRUE).
I've implemented an automatic anti-DDoS process which prevents your IP from being banned by the exchange market. You can find it in jangorecki/Rbitcoin/R/utils.R.
Rbitcoin.last_api_call is an environment object stored in the package namespace, a kind of session-level package cache.
This can help you with setting up something similar in your package.
You should also consider an optional version with parallel support, linking to a database with concurrent reads. My function can easily be modified to queue calls and recheck the timing every X seconds.
Edit
I forgot to add that the mentioned function supports multiple source systems. That allows you, for example, to extend your rgithub for Bitbucket, etc., and still effectively manage API rate limiting.