Why is my vector not accumulating the iteration? - julia

I have data that I want to send to a dataframe (see the EDIT below for how I obtain it). So I am iterating through it and pushing each item into a vector, but my vector never keeps the data.
Dv = Vector{Dict}()
for item in reader
    push!(Dv, item)
end
length(Dv)
This is what I get: the vector stays empty, so length(Dv) is 0. And I am sure this is the right way to do it; the equivalent loop works in Python.
EDIT
This is the code that I use to access the data that I want to send to a dataframe:
results = pyimport("splunklib.results")
kwargs_oneshot = (earliest_time = "2019-09-07T12:00:00.000-07:00",
                  latest_time = "2019-09-09T12:00:00.000-07:00",
                  count = 0)
searchquery_oneshot = "search index=iis | lookup geo_BST_ONT longitude as sLongitude, latitude as sLatitude | stats count by featureId | geom geo_BST_ONT allFeatures=True | head 2"
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot; kwargs_oneshot...)
# Get the results and display them using the ResultsReader
reader = results.ResultsReader(oneshotsearch_results)
for item in reader
    println(item)
end

ResultsReader is a streaming reader. This means you "consume" its elements as you iterate over them. You can convert it to an array with collect. Do not print the items before you collect.
results = pyimport("splunklib.results")
kwargs_oneshot = (earliest_time = "2019-09-07T12:00:00.000-07:00",
                  latest_time = "2019-09-09T12:00:00.000-07:00",
                  count = 0)
searchquery_oneshot = "search index=iis | lookup geo_BST_ONT longitude as sLongitude, latitude as sLatitude | stats count by featureId | geom geo_BST_ONT allFeatures=True | head 2"
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot; kwargs_oneshot...)
# Get the results
reader = results.ResultsReader(oneshotsearch_results)
# Collect them into an array
Dv = collect(reader)
# Now you can iterate over them without consuming the stream
for item in Dv
    println(item)
end
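The same consume-once behavior is easy to reproduce with a plain Python generator, which is likely why the Python version only appeared to work when the items were not printed first. A minimal sketch (the generator below is a hypothetical stand-in for splunklib's ResultsReader, not real splunklib code):

def fake_reader():
    # Hypothetical stand-in for splunklib's streaming ResultsReader
    yield from ({"featureId": i} for i in range(3))

reader = fake_reader()
for item in reader:        # the first pass consumes the stream
    print(item)
print(len(list(reader)))   # prints 0 -- nothing is left to collect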

Related

Kusto: Apply function on multiple column values during bag_unpack

Given a dynamic field, say milestones, with a value like {"ta": 1655859586546, "tb": 1655859586646}, how do I print a table with columns "ta", "tb", etc., whose single row holds unixtime_milliseconds_todatetime(tolong(taValue)), unixtime_milliseconds_todatetime(tolong(tbValue)), and so on?
I figured that I'll need to write a function that I can call, so I created this:
let f = view(a:string) {
    unixtime_milliseconds_todatetime(tolong(a))
};
I can use this function with a normal column, as in project f(columnName).
However, in this case it's a dynamic field, and the number of items in the list is large, so I do not want to enter the fields manually. This is what I have so far:
log_table
| take 1
| evaluate bag_unpack(milestones, "m_") // This gives me the fields as columns
// | project-keep m_* // This would work if I just wanted the values; however, I want f(columnValue)
| project-keep f(m_*) // This of course doesn't work, but explains the idea.
Based on the mv-apply operator:
// Generate a data sample. Not part of the solution.
let log_table = materialize(
    range record_id from 1 to 10 step 1
    | mv-apply range(1, 1 + rand(5), 1) on (
        summarize milestones = make_bag(pack_dictionary(
            strcat("t", make_string(to_utf8("a")[0] + toint(rand(26)))),
            1600000000000 + rand(60000000000)))));
// Solution starts here.
log_table
| mv-apply kv = milestones on
(
    extend k = tostring(bag_keys(kv)[0])
    | extend v = unixtime_milliseconds_todatetime(tolong(kv[k]))
    | summarize milestones = make_bag(pack_dictionary(k, v))
)
| evaluate bag_unpack(milestones)
(Result table omitted: one row per record_id from 1 to 10, with sparsely populated datetime columns ta, tb, tc, ..., tz holding the converted milestone timestamps, e.g. 2021-07-06T20:24:47.767Z.)
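At its core, the mv-apply block just rebuilds the bag with each value run through unixtime_milliseconds_todatetime. For readers more comfortable in Python, here is a rough sketch of the same transformation on a single bag (illustration only, not KQL):

from datetime import datetime, timezone

milestones = {"ta": 1655859586546, "tb": 1655859586646}

# Convert each epoch-milliseconds value to a UTC datetime, keeping the keys
converted = {k: datetime.fromtimestamp(v / 1000, tz=timezone.utc)
             for k, v in milestones.items()}
print(converted)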

Python ArcPy - Print Layer with highest field value

I have some Python code that goes through the layers in my ArcGIS project and prints out the layer names and their corresponding highest value within the field "SUM_USER_VisitCount".
What I want the code to do is print only the layer name and SUM_USER_VisitCount field value for the one layer with the absolute highest value.
I have been unable to figure out how to achieve this and can't find anything online either. Can someone help me achieve my desired output?
Sorry if the code layout is a little weird. It got messed up when I pasted it into the "code sample"
Here is my code:
import arcpy
import datetime
from datetime import timedelta
import time

# Document start time in order to calculate run time
time1 = time.clock()

# Assign project and map frame
p = arcpy.mp.ArcGISProject(r'E:\arcGIS_Shared\Python\CumulativeHeatMaps.aprx')
m = p.listMaps('Map')[0]

Markets = [3000]

### Centers to loop through
CA_Centers = ['Castro', 'ColeValley', 'Excelsior', 'GlenPark',
              'LowerPacificHeights', 'Marina', 'NorthBeach', 'RedwoodCity',
              'SanBruno', 'DalyCity']

for Market in Markets:
    print(Market)
    for CA_Center in CA_Centers:
        Layers = m.listLayers(
            "CumulativeSumWithin{0}_{1}_Jun2018".format(Market, CA_Center))
        fields = ['SUM_USER_VisitCount']
        for Layer in Layers:
            print(Layer)
            sqlClause = (None, 'ORDER BY SUM_USER_VisitCount')  # + ' DESC'
            with arcpy.da.SearchCursor(in_table=Layer, field_names=fields,
                                       sql_clause=sqlClause) as searchCursor:
                print(max(searchCursor))
You can create a dictionary that stores the result from each query and then print out the highest one at the end.
results_dict = {}
for Market in Markets:
    print(Market)
    for CA_Center in CA_Centers:
        Layers = m.listLayers(
            "CumulativeSumWithin{0}_{1}_Jun2018".format(Market, CA_Center))
        fields = ['SUM_USER_VisitCount']
        for Layer in Layers:
            print(Layer)
            sqlClause = (None, 'ORDER BY SUM_USER_VisitCount')
            with arcpy.da.SearchCursor(in_table=Layer, field_names=fields,
                                       sql_clause=sqlClause) as searchCursor:
                # A search cursor can only be iterated once, so compute the
                # maximum a single time and reuse it
                layer_max = max(searchCursor)
                print(layer_max)
                results_dict[Layer] = layer_max

# Get the key of the dictionary item with the highest value
highest_count_layer = max(results_dict, key=results_dict.get)
print(highest_count_layer)
print(results_dict[highest_count_layer])
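The key line is max(results_dict, key=results_dict.get), which returns the dictionary key whose stored value is largest. A standalone illustration with made-up layer names and counts:

counts = {"Castro": 412, "Marina": 958, "SanBruno": 731}  # made-up values
busiest = max(counts, key=counts.get)  # compares entries by their stored value
print(busiest, counts[busiest])        # Marina 958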

IndexOf() is not working if there is some accent characters in it

I have a DataTable (dtblCostCategory) with the following values, which I use in a dropdown. I saved some data after selecting a value from the dropdown, but when I load the same page again the saved value is not shown as selected; instead the first value is shown in the dropdown.
dsOtherDetails
CostCategory | typeId | itemCount
----------------------------------------
Softwaré | 3 | 15
dtblCostCategory
CostCategory | typeId
----------------------------
Electronics | 1
Groceries | 2
Softwaré | 3
cboCategory.DataSource = dtblCostCategory
cboCategory.DataTextField = dtblCostCategory.Columns(1).ToString
cboCategory.DataValueField = dtblCostCategory.Columns(0).ToString
cboCategory.DataBind()
Dim lstItem As New ListItem
lstItem.Text = Server.HtmlEncode(Trim(CStr(dsOthersDetails.Tables(0).Rows(0).Item("CostCategory"))))
lstItem.Value = Server.HtmlEncode(CStr(dsOthersDetails.Tables(0).Rows(0).Item("typeId")))
cboCategory.SelectedIndex = cboCategory.Items.IndexOf(lstItem)
In the above code I have used IndexOf to get the selected index by comparing the values from the two tables. As there is an accent in the category (Softwaré), IndexOf is not working properly. Is there a way to get the selected index ignoring the accents, so that the dropdown shows the correct selected value?
Try with FindByValue():
cboCategory.SelectedIndex = cboCategory.Items.IndexOf(cboCategory.Items.FindByValue(lstItem.Value))
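If you do need a truly accent-insensitive comparison rather than an exact value match, the usual approach is to Unicode-normalize both strings and drop the combining marks before comparing. A minimal sketch of the idea in Python (the same normalization is available in .NET via String.Normalize):

import unicodedata

def strip_accents(text):
    # Decompose accented characters (NFD), then drop the combining marks
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents("Softwaré"))                # Software
print(strip_accents("Softwaré") == "Software")  # True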

IndexError: list index out of range, scores.append( (fields[0], fields[1]))

I'm trying to read a file and put its contents in a list. I have done this many times before and it has worked, but this time it throws back the error "list index out of range".
The code is:
with open("File.txt") as f:
    scores = []
    for line in f:
        fields = line.split()
        scores.append((fields[0], fields[1]))
print(scores)
The text file is in the format:
Alpha:[0, 1]
Bravo:[0, 0]
Charlie:[60, 8, 901]
Foxtrot:[0]
I can't see why it is giving me this problem. Is it because I have more than one value for each item? Or is it the fact that I have a colon in my text file?
How can I get around this problem?
Thanks
Your original code fails because line.split() splits on whitespace, so a line like Foxtrot:[0] yields only one field and fields[1] is out of range. If I understand you well, this code will print your desired result:
import re

with open("File.txt") as f:
    # Build a dictionary of scores: {name: scores}.
    scores = {}
    # Regular expressions to parse the team name and team scores from a line.
    patternScore = r'\[([^\]]+)\]'
    patternName = r'(.*):'
    for line in f:
        # Extract the team's scores and name.
        fields = re.search(patternScore, line).groups()[0].split(', ')
        name = re.search(patternName, line).groups()[0]
        # Update the dictionary with the new value.
        scores[name] = fields

# Print the first score for each team, then the team name.
for key in scores:
    print(scores[key][0] + ':' + key)
You will receive the following output:
60:Charlie
0:Alpha
0:Bravo
0:Foxtrot
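If you would rather avoid regular expressions, the colon makes a convenient split point. A regex-free sketch, under the same assumptions about the file format:

with open("File.txt") as f:
    scores = {}
    for line in f:
        # "Charlie:[60, 8, 901]" -> name "Charlie", rest "[60, 8, 901]"
        name, _, rest = line.strip().partition(":")
        scores[name] = rest.strip("[]").split(", ")

for key in scores:
    print(scores[key][0] + ':' + key)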

LumenWorks CSV reader does not read values starting with the # symbol

I have a code snippet in C# which helps read a CSV file into a list. The problem is that it does not read records which start with the # symbol.
For instance, if I have these two records, only the sderik record is taken and the other record is missing, as it starts with the # symbol. What could be the reason?
sderik | sample1 | sample 2| sample 3
#smissingrecord | sample1 | sample 2| sample 3
using (LumenWorks.Framework.IO.Csv.CsvReader csv =
    new LumenWorks.Framework.IO.Csv.CsvReader(reader, true, '|'))
{
    outDataTable = Common.CommonFunction.ConvertListToDataTable(csv.ToList());
    retValue = true;
}
The # is the default comment character. Override it by passing a sixth parameter to the CsvReader constructor that is not '#'.
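For contrast, the comment-character concept is easy to demonstrate outside LumenWorks. Python's csv module has no such option at all, so a sketch that emulates one makes the behavior visible (sample data and delimiter taken from the question):

import csv, io

data = "sderik|sample1|sample 2|sample 3\n#smissingrecord|sample1|sample 2|sample 3\n"

def read_rows(text, comment_char):
    # Drop only the lines that begin with the active comment character
    lines = (l for l in io.StringIO(text) if not l.startswith(comment_char))
    return list(csv.reader(lines, delimiter="|"))

print(len(read_rows(data, "#")))  # 1 -- the second record is treated as a comment
print(len(read_rows(data, ";")))  # 2 -- with a different comment char, both survive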
