select xml node unable to pick out specific data - asp-classic

<B:CreditVehicle>
  <B:Model>Accord Sedan</B:Model>
  <B:ModelYear>2014</B:ModelYear>
  <B:ModelDescription>CR2F3EEW</B:ModelDescription>
  <B:Make>Honda</B:Make>
  <B:SaleClass>Used</B:SaleClass>
  <B:Condition>CLEAN</B:Condition>
  <B:CertifiedPreownedInd>0</B:CertifiedPreownedInd>
  <B:VehicleNote>4dr I4 CVT LX</B:VehicleNote>
  <B:VIN>1HGCR2F37EA078469</B:VIN>
  <B:DeliveryMileage uom="M">101550</B:DeliveryMileage>
  <B:VehicleDemoInd>0</B:VehicleDemoInd>
  <B:Pricing>
    <B:VehiclePrice currency="USD">11100</B:VehiclePrice>
    <B:VehiclePricingType>MSRP</B:VehiclePricingType>
  </B:Pricing>
  <B:Pricing>
    <B:VehiclePrice currency="USD">12149</B:VehiclePrice>
    <B:VehiclePricingType>Selling Price</B:VehiclePricingType>
  </B:Pricing>
  <B:Pricing>
    <B:VehiclePrice currency="USD">8925</B:VehiclePrice>
    <B:VehiclePricingType>Wholesale Price</B:VehiclePricingType>
  </B:Pricing>
  <B:CollateralType>1</B:CollateralType>
  <B:AuctionInd>0</B:AuctionInd>
  <B:VehicleUse>7</B:VehicleUse>
</B:CreditVehicle>
In the above XML I am trying to cherry-pick the "Wholesale Price" value of "8925". I was trying to do so by selecting a single node, but I have not been able to.
This is what I was trying to do:
Set Node = xmlDoc.documentElement.selectSingleNode("D:Body/B:ProcessCreditApplication/B:DataArea/B:CreditApplication/B:Detail/B:CreditVehicle/B:Pricing[#VehiclePricingType='Wholesale Price']/B:VehiclePrice")
I have been able to get other values using selectSingleNode, but perhaps this structure of the XML makes the value impossible to reach that way.
Does anyone have an idea or experience with doing this in a simple manner?
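For reference, a minimal sketch of one way this is often done with MSXML in classic ASP. Two things usually trip this up: the predicate should name the child element with its B: prefix (rather than #VehiclePricingType), and every prefix used in the XPath has to be registered through SelectionNamespaces. The namespace URIs and the file name credit.xml below are placeholders you would replace with the ones from your actual document, and the path is copied from the question, so adjust it if documentElement is not where that chain starts.
Set xmlDoc = Server.CreateObject("MSXML2.DOMDocument.6.0")
xmlDoc.async = False
' Placeholder URIs - use the xmlns:D and xmlns:B URIs declared in your actual document
xmlDoc.setProperty "SelectionNamespaces", "xmlns:D='urn:example:D' xmlns:B='urn:example:B'"
xmlDoc.load Server.MapPath("credit.xml")
' Pick the Pricing element whose VehiclePricingType child is 'Wholesale Price', then read its VehiclePrice
Set Node = xmlDoc.documentElement.selectSingleNode("D:Body/B:ProcessCreditApplication/B:DataArea/B:CreditApplication/B:Detail/B:CreditVehicle/B:Pricing[B:VehiclePricingType='Wholesale Price']/B:VehiclePrice")
If Not Node Is Nothing Then Response.Write Node.text ' 8925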

Related

DataStage Sequence job - how to process each file one at a time when the files are in 7 different folders

DataStage - There are 7 folders in a path, and in each folder there are 2 files. For example, the 2 files are in the following format: filename = test_s1_YYYYMMDD.txt, test_s1_YYYYMMDD.done. The paths for these files are:
user/test/test_s1/
user/test/test_s2/
...
user/test/test_s7/
Here s1, s2, ..., s7 represent the different folders. In these folders the 2 files mentioned above are present, so how can I process each file in a Sequence job?
First you need a job that processes a single file, and the filename needs to be a parameter of that job.
At the Sequence level you need two levels - an inner one for the two files within each folder and an outer one for the different directories.
For the inner one you can either build a loop with two iterations or simply add the processing job twice to the sequence (which reduces complexity if it will always be two files).
The outer Sequence is a loop where you parameterize the path so that the loop counter can be used to generate your flexible 1-7 path suffix.
Check out more details on loops here
You can use the loop counter (stage_label.$Counter) to parameterize your job.
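As a rough sketch of that parameterization (StartLoop_Activity and FolderPath are placeholder names for your own stage label and job parameter; the path comes from the question): configure the StartLoop activity as a numeric loop from 1 to 7 with step 1, and in the Job Activity give the folder parameter a value expression along the lines of
FolderPath = "user/test/test_s" : StartLoop_Activity.$Counter : "/"
so that each iteration points the processing job at the next test_sN directory.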
Depending on what you want to do with the files, how you process them is an important decision. Starting a job (or more) in a sequence for each file can lead to heavy overhead just for starting the jobs. Try loading all files at once in a parallel job using the Sequential File stage.
In the Sequential File stage, set the appropriate Format. You can also set everything to none to just put each row in one column and process that in a later job. This makes the reading very flexible and forgiving. If your files all have the same structure, define your columns as needed.
To select the files, use File Patterns. In the Options of the Sequential File stage, choose to have a File Name Column so you can process the filenames in a later job. You might also want to add a Row Number Column.
This method works pretty fast.
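For illustration, a File Pattern along these lines (an assumption based on the folder layout in the question; adjust it to your real paths and decide whether you also want the .done files) would read all seven folders' .txt files in one pass:
user/test/test_s*/test_s*_*.txt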

How to combine many records' values into one record

As you can see from the picture below, I was able to combine two deals (blocked in red), but the output should have one result instead of two. If anyone has any solutions for this, please advise.
The component blocked in red has more than one record; each record has an amount, and the sum of all the record amounts must be shown in a single row:
record1: Amount:100
record2: Amount:200
record3: Amount:500
The merge of all records is the following:
record: Amount:800
Is it possible to merge many rows into a single row in Integromat?
Based on your screenshot you are aggregating the wrong module. The source module in your aggregator has to be set to a module that generates multiple bundles; in your case that is module 10.
You are aggregating module 14, which generates a single output bundle for every input bundle, so there is nothing to aggregate. Module 10 returns 2 bundles for a single input.
Your case:
/---[6]---([14]---[11 aggregator])---
---[10] multiple output bundles
\---[6]---([14]---[11 aggregator])---
Solution:
/---[6]---[14]---\
---([10] [11 aggregator])--- single output bundle
\---[6]---[14]---/
Your scenario has to look like this (Aggregator: Source module = module no.10):

Neo4J and Cypher query

I am new to Neo4j and Cypher queries. My model is that each shop has 2 chillers, each chiller has 2 PLCs, and each PLC in turn has 2 sensors.
The create statements are as below:
Create(:SHOP{name:"Shop1"})-[:hasChiller]->(:CHILLER{name:"Chiller1"})
Create(:SHOP{name:"Shop1"})-[:hasChiller]->(:CHILLER{name:"Chiller2"})
Create(:SHOP{name:"Shop2"})-[:hasChiller]->(:CHILLER{name:"Chiller3"})
Create(:SHOP{name:"Shop2"})-[:hasChiller]->(:CHILLER{name:"Chiller4"})
Create(:CHILLER{name:"Chiller1"})-[:hasPLC]->(:PLC{name:"Plc1"})
Create(:CHILLER{name:"Chiller1"})-[:hasPLC]->(:PLC{name:"Plc2"})
Create(:CHILLER{name:"Chiller2"})-[:hasPLC]->(:PLC{name:"Plc3"})
Create(:CHILLER{name:"Chiller2"})-[:hasPLC]->(:PLC{name:"Plc4"})
Create(:CHILLER{name:"Chiller3"})-[:hasPLC]->(:PLC{name:"Plc5"})
Create(:CHILLER{name:"Chiller3"})-[:hasPLC]->(:PLC{name:"Plc6"})
Create(:CHILLER{name:"Chiller4"})-[:hasPLC]->(:PLC{name:"Plc7"})
Create(:CHILLER{name:"Chiller4"})-[:hasPLC]->(:PLC{name:"Plc8"})
Create(:PLC{name:"Plc1"})-[:hasSensor]->(:SENSOR{name:"Sensor1"})
Create(:PLC{name:"Plc1"})-[:hasSensor]->(:SENSOR{name:"Sensor2"})
Create(:PLC{name:"Plc2"})-[:hasSensor]->(:SENSOR{name:"Sensor3"})
Create(:PLC{name:"Plc2"})-[:hasSensor]->(:SENSOR{name:"Sensor4"})
Create(:PLC{name:"Plc3"})-[:hasSensor]->(:SENSOR{name:"Sensor5"})
Create(:PLC{name:"Plc3"})-[:hasSensor]->(:SENSOR{name:"Sensor6"})
Create(:PLC{name:"Plc4"})-[:hasSensor]->(:SENSOR{name:"Sensor7"})
Create(:PLC{name:"Plc4"})-[:hasSensor]->(:SENSOR{name:"Sensor8"})
Create(:PLC{name:"Plc5"})-[:hasSensor]->(:SENSOR{name:"Sensor9"})
Create(:PLC{name:"Plc5"})-[:hasSensor]->(:SENSOR{name:"Sensor10"})
Create(:PLC{name:"Plc6"})-[:hasSensor]->(:SENSOR{name:"Sensor11"})
Create(:PLC{name:"Plc6"})-[:hasSensor]->(:SENSOR{name:"Sensor12"})
Create(:PLC{name:"Plc7"})-[:hasSensor]->(:SENSOR{name:"Sensor13"})
Create(:PLC{name:"Plc7"})-[:hasSensor]->(:SENSOR{name:"Sensor14"})
Create(:PLC{name:"Plc8"})-[:hasSensor]->(:SENSOR{name:"Sensor15"})
Create(:PLC{name:"Plc8"})-[:hasSensor]->(:SENSOR{name:"Sensor16"})
However, the MATCH to get the sensors under Shop1
MATCH(s:SHOP{name:"Shop1"})-[:hasChiller]->(cc:CHILLER)-[:hasPLC]->(pp:PLC)-[:hasSensor]->(ss:SENSOR) return ss.name
returns nothing. It says no changes and no data.
I am trying this out in the Neo4j sandbox environment. I did this based on my understanding of the MATCH clause in SQL Server Graph 2019, where this works.
Can anyone point out where I am going wrong?
You are improperly creating multiple instances of the "same" node. You should create each node once, and then use its bound variable name later on when you need to create relationships involving that node.
Delete all your data and follow this pattern instead (you have to fill in the "..." parts):
CREATE
(sh1:SHOP{name:"Shop1"}), (sh2:SHOP{name:"Shop2"}),
(c1:CHILLER{name:"Chiller1"}), (c2:CHILLER{name:"Chiller2"}),(c3:CHILLER{name:"Chiller3"}), (c4:CHILLER{name:"Chiller4"}),
(p1:PLC{name:"Plc1"}), ..., (p8:PLC{name:"Plc8"}),
(se1:SENSOR{name:"Sensor1"}), ..., (se16:SENSOR{name:"Sensor16"}),
(sh1)-[:hasChiller]->(c1), (sh1)-[:hasChiller]->(c2),
... // create remaining relationships using bound variable names for nodes
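If you would rather build the graph up in separate statements, MERGE is a common alternative to CREATE here (a small sketch of the idea, not part of the answer above): it reuses a node or relationship that already exists instead of creating a duplicate.
MERGE (sh1:SHOP {name:"Shop1"})
MERGE (c1:CHILLER {name:"Chiller1"})
MERGE (sh1)-[:hasChiller]->(c1)
Either way, once each node exists only once, the MATCH query from the question should return Sensor1 through Sensor8 for Shop1.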

Titan Graph Queries taking too long to execute

I have a problem with the execution speed of Titan queries.
To be more specific:
I created a properties file for my graph using BerkeleyJE, which looks like this:
storage.backend=berkeleyje
storage.directory=/finalGraph_script/graph
Afterwards, I opened Gremlin.bat to open my graph.
I set up all the necessary index keys for my nodes:
m = g.getManagementSystem();
username = m.makePropertyKey('username').dataType(String.class).make()
m.buildIndex('byUsername',Vertex.class).addKey(username).unique().buildCompositeIndex()
m.commit()
g.commit()
(all other keys are created the same way...)
I imported a CSV file containing about 100,000 lines; each line produces at least 2 nodes and some edges. All this is done via batch loading.
That works without a problem.
Then I execute a groupBy query which looks like this:
m = g.V.has("imageLink").groupBy{it.imageLink}{it.in("is_on_image").out("is_species")}{it._().species.groupCount().cap.next()}.cap.next()
With this query I want, for every node with the property key "imageLink", the number of different "species". "Species" are also nodes, and can be reached by going back along the edge "is_on_image" and then following the edge "is_species".
Well, this is also working like a charm for my recent nodes. The query takes about 2 minutes on my local PC.
But now to the problem.
My whole dataset is a CSV with 10 million entries. The structure is the same as above, and each line also creates at least 2 nodes and some edges.
On my local PC I can't even import this set; it causes a memory exception after 3 days of loading.
So I tried the same on a server with much more RAM and memory. There the import works and takes about 1 day, but the groupBy fails after about 3 days.
I actually don't know whether the groupBy itself fails, or just the connection to the server after that long a time.
So my first question:
In my opinion, about 15 million nodes shouldn't be that big of a deal for a graph database, should it?
Second question:
Is it normal that it takes so long? Or is there any way to speed it up using indices? I configured the indices as listed above :(
I don't know exactly which information you need to help me, but please just tell me what you need in addition to the above.
Thanks a lot!
Best regards,
Ricardo
EDIT 1: This is the way I'm loading the CSV into the graph.
I'm using the code below. I deleted some unnecessary properties, which are also set as properties on some nodes and loaded the same way.
bg = new BatchGraph(g, VertexIDType.STRING, 10000)
new File("annotation_nodes_wNothing.csv").eachLine({ final String line ->def (annotationId,species,username,imageLink) = line.split('\t')*.trim();def userVertex = bg.getVertex(username) ?: bg.addVertex(username);def imageVertex = bg.getVertex(imageLink) ?: bg.addVertex(imageLink);def speciesVertex = bg.getVertex(species) ?: bg.addVertex(species);def annotationVertex = bg.getVertex(annotationId) ?: bg.addVertex(annotationId);userVertex.setProperty("username",username);imageVertex.setProperty("imageLink", imageLink);speciesVertex.setProperty("species",species);annotationVertex.setProperty("annotationId", annotationId);def classifies = bg.addEdge(null, userVertex, annotationVertex, "classifies");def is_on_image = bg.addEdge(null, annotationVertex, imageVertex, "is_on_image");def is_species = bg.addEdge(null, annotationVertex, speciesVertex, "is_species");})
bg.commit()
g.commit()
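One note on the index question, based on the definitions shown above: a Titan composite index only accelerates exact-match lookups such as has('imageLink', someValue); a bare existence filter like g.V.has("imageLink") still scans every vertex, which is consistent with the running times described. If imageLink is not already one of the keys you created, it can be defined the same way as the username key (a sketch following the pattern already in the question, not tested against your schema):
m = g.getManagementSystem()
imageLink = m.makePropertyKey('imageLink').dataType(String.class).make()
m.buildIndex('byImageLink', Vertex.class).addKey(imageLink).buildCompositeIndex()
m.commit()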

Automatically create new objects in a Plone Folder, with ids that are just sequential numbers

I have the following structure:
/Plone/folder/year/month/day/id
I want to create the last id sequentially in a tool using invokeFactory. I want to have:
/Plone/folder/2011/06/21/1
/Plone/folder/2011/06/21/2
/Plone/folder/2011/06/21/3
/Plone/folder/2011/06/21/4
Instead of:
/Plone/folder/2011/06/21/id-1
/Plone/folder/2011/06/21/id-2
/Plone/folder/2011/06/21/id-3
/Plone/folder/2011/06/21/id-4
...which is what Plone automatically does for me when I try to create an object with the same name in a folder: it appends a sequential number. I want an efficient way to create objects, but using only a sequential number instead of a name with a sequential number appended. I can do that by getting the total number of items in a folder, but I would like to know if there's a better way.
Real life example: http://plone.org/products/collective.captcha/issues/4
If you're creating these objects manually, you can do something like:
# Find the highest existing id in the folder and add 1 (the catalog sorts ids as strings)
brains = context.getFolderContents({'sort_on': 'id', 'sort_order': 'reverse'})
if len(brains) > 0:
    id = str(int(brains[0].id) + 1)
else:
    id = '1'
Then you'll have to create the object manually with that id.
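For example, tying that id into invokeFactory might look like this ('MyIssue' is a placeholder for your actual portal type, and context is assumed to be the day folder you are creating into):
# 'MyIssue' is a placeholder portal type; id comes from the snippet above
new_id = context.invokeFactory('MyIssue', id=id, title='Issue %s' % id)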
If you want this to be done automatically for you when a user creates content, you might want to look into creating a content rule that changes the id of the content for you. An example that might help is collective.contentrules.yearmonth
