I'm loading RDF data into a JUNG graph to do some analysis.
So I create a new graph with:
DirectedGraph<String,GraphLink> g = new DirectedSparseGraph<String,GraphLink>();
I created a support class for specifying the link:
public class GraphLink {
String uri;
Float weight;
}
Then I populate it like this:
for each rdf triple <s,p,o> {
    g.addVertex( s )
    g.addVertex( o )
    GraphLink link = new GraphLink()
    link.uri = p
    link.weight = some weight
    g.addEdge( link, s, o )
}
Is this an efficient way of doing it, or are there better ways?
The representation of the edges is very counterintuitive, but if I do:
g.addEdge( p, s, o )
I get an exception about a duplicate edge.
Any hints?
UPDATE: this code seems to work well:
DirectedGraph<RDFNode,Statement> g = new DirectedSparseGraph<RDFNode,Statement>()
// list all statements
// TODO: pagination for very large graphs.
assert m.size() < 10000000 : "graph is too large"
m.listStatements().each{ stm ->
    RDFNode sub = stm.getSubject()
    RDFNode obj = stm.getObject()
    g.addVertex( sub )
    if ( includeLiterals || !obj.isLiteral() ){
        g.addVertex( obj )
        g.addEdge( stm, sub, obj, EdgeType.DIRECTED )
    }
}
Mulone
This may not be what you want at all, but you could try JenaJung, which presents a Jena model as a JUNG graph.
From the README file:
Model model = FileManager.get().loadModel("http://example.com/data.rdf");
Graph<RDFNode, Statement> g = new JenaJungGraph(model);
Layout<RDFNode, Statement> layout = new FRLayout(g);
layout.setSize(new Dimension(300, 300));
BasicVisualizationServer<RDFNode, Statement> viz =
new BasicVisualizationServer<RDFNode, Statement>(layout);
Related
When I use deeplearning4j and try to train a model in Spark, I need to call
public MultiLayerNetwork fit(JavaRDD<DataSet> trainingData)
fit() needs a JavaRDD<DataSet> parameter.
I tried to build one like this:
val totalDataset = csv.map(row => {
val features = Array(
row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
)
val labels = Array(row.getAs[String](21).toDouble)
val featuresINDA = Nd4j.create(features)
val labelsINDA = Nd4j.create(labels)
new DataSet(featuresINDA, labelsINDA)
})
but the hint from IDEA is No implicit arguments of type: Encoder[DataSet]
It's an error and I don't know how to solve this problem.
I know a Spark RDD can be transformed into a JavaRDD, but I don't know how to build a Spark RDD[DataSet].
DataSet comes from import org.nd4j.linalg.dataset.DataSet
Its constructor is:
public DataSet(INDArray first, INDArray second) {
this(first, second, (INDArray)null, (INDArray)null);
}
This is my code:
val spark:SparkSession = {SparkSession
.builder()
.master("local")
.appName("Spark LSTM Emotion Analysis")
.getOrCreate()
}
import spark.implicits._
val JavaSC = JavaSparkContext.fromSparkContext(spark.sparkContext)
val csv=spark.read.format("csv")
.option("header","true")
.option("sep",",")
.load("/home/hadoop/sparkjobs/LReg/data.csv")
val totalDataset = csv.map(row => {
val features = Array(
row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
)
val labels = Array(row.getAs[String](21).toDouble)
val featuresINDA = Nd4j.create(features)
val labelsINDA = Nd4j.create(labels)
new DataSet(featuresINDA, labelsINDA)
})
val data = totalDataset.toJavaRDD
Creating a JavaRDD<DataSet> in Java, from the deeplearning4j official guide:
String filePath = "hdfs:///your/path/some_csv_file.csv";
JavaSparkContext sc = new JavaSparkContext();
JavaRDD<String> rddString = sc.textFile(filePath);
RecordReader recordReader = new CSVRecordReader(',');
JavaRDD<List<Writable>> rddWritables = rddString.map(new StringToWritablesFunction(recordReader));
int labelIndex = 5; //Labels: a single integer representing the class index in column number 5
int numLabelClasses = 10; //10 classes for the label
JavaRDD<DataSet> rddDataSetClassification = rddWritables.map(new DataVecDataSetFunction(labelIndex, numLabelClasses, false));
I tried to create it in Scala:
val JavaSC: JavaSparkContext = new JavaSparkContext()
val rddString: JavaRDD[String] = JavaSC.textFile("/home/hadoop/sparkjobs/LReg/hf-data.csv")
val recordReader: CSVRecordReader = new CSVRecordReader(',')
val rddWritables: JavaRDD[List[Writable]] = rddString.map(new StringToWritablesFunction(recordReader))
val featureColnum = 3
val labelColnum = 1
val d = new DataVecDataSetFunction(featureColnum,labelColnum,true,null,null)
// val rddDataSet: JavaRDD[DataSet] = rddWritables.map(new DataVecDataSetFunction(featureColnum,labelColnum, true,null,null))
// cannot resolve overloaded method 'map'
Debug error information:
A DataSet is just a pair of INDArrays. (inputs and labels)
Our docs cover this in depth:
https://deeplearning4j.konduit.ai/distributed-deep-learning/data-howto
For Stack Overflow's sake, I'll summarize what's there, since there's no one way to create a data pipeline. It depends on your problem. It's very similar to how you would create a dataset locally: generally you want to take whatever you do locally and put that into Spark in a function.
CSVs and images for example are going to be very different. But generally you use the datavec library to do that. The docs summarize the approach for each kind.
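To make the Scala attempt compile, one likely fix (an untested sketch, not the only way) is to type the intermediate RDD as java.util.List[Writable] rather than Scala's List, because StringToWritablesFunction returns a java.util.List. The earlier "No implicit arguments of type: Encoder[DataSet]" error comes from mapping a DataFrame, which needs an Encoder for the result type; mapping a plain JavaRDD avoids that. The file path and label column below are placeholders, and the sketch assumes a headerless CSV where every non-label column is a feature:
import java.util.{List => JList}
import org.apache.spark.api.java.{JavaRDD, JavaSparkContext}
import org.datavec.api.records.reader.impl.csv.CSVRecordReader
import org.datavec.api.writable.Writable
import org.datavec.spark.transform.misc.StringToWritablesFunction
import org.deeplearning4j.spark.datavec.DataVecDataSetFunction
import org.nd4j.linalg.dataset.DataSet

val javaSC = JavaSparkContext.fromSparkContext(spark.sparkContext)
val rddString: JavaRDD[String] = javaSC.textFile("/home/hadoop/sparkjobs/LReg/data.csv")

// Declare the element type as java.util.List so the Java Function overload of map() applies.
val rddWritables: JavaRDD[JList[Writable]] =
  rddString.map(new StringToWritablesFunction(new CSVRecordReader())) // default delimiter is ','

val labelIndex = 21      // placeholder: column holding the label
val numLabelClasses = -1 // not used when regression = true
val rddDataSet: JavaRDD[DataSet] =
  rddWritables.map(new DataVecDataSetFunction(labelIndex, numLabelClasses, true))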
I want to list the SPARQL query results in a TextBox (multi-line), a GridView, or a list.
But the code shown below shows only one result!
Please, any help?
IGraph g = new Graph();
g.LoadFromFile("example.owl");
try
{
    SparqlQueryParser par = new SparqlQueryParser();
    SparqlQuery q = par.ParseFromString(@"PREFIX uni:<http://www.semanticweb.org/salim/ontologies/2018/10/university-ontology-2#>
SELECT ?P_Name (COUNT(?P_Name) AS ?Material_Num)
WHERE
{
    ?P uni:Have ?Material;
       uni:P_Name ?P_Name.
}
GROUP BY ?P_Name");
    object results = g.ExecuteQuery(q);
    if (results is SparqlResultSet)
    {
        SparqlResultSet rset = (SparqlResultSet)results;
        foreach (SparqlResult r in rset)
        {
            TextBox1.Text = r.ToString();
            //or
            GridView1.DataSource = r.ToString();
            GridView1.DataBind();
        }
    }
}
catch (Exception ex)
{
    // handle parse or query errors here
}
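The loop above overwrites TextBox1.Text on every iteration, so only the last row survives, and a single SparqlResult is not a usable DataSource either. Here is a rough, untested sketch of one way to show every row: accumulate the text for the TextBox and build a DataTable for the GridView (it assumes the dotNetRDF members SparqlResultSet.Variables, SparqlResult.HasValue and the SparqlResult indexer):
SparqlResultSet rset = (SparqlResultSet)results;

// Multi-line TextBox: append every row instead of overwriting.
var sb = new System.Text.StringBuilder();
foreach (SparqlResult r in rset)
{
    sb.AppendLine(r.ToString());
}
TextBox1.Text = sb.ToString();

// GridView: one column per SPARQL variable, one row per result.
var table = new System.Data.DataTable();
foreach (string name in rset.Variables)
{
    table.Columns.Add(name);
}
foreach (SparqlResult r in rset)
{
    System.Data.DataRow row = table.NewRow();
    foreach (string name in rset.Variables)
    {
        row[name] = r.HasValue(name) ? r[name].ToString() : "";
    }
    table.Rows.Add(row);
}
GridView1.DataSource = table;
GridView1.DataBind();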
I know that you can use tables in a similar way to pointers in Lua. That being said, what would pointers to pointers look like? Would they look something like dp = {p = {}}? If so, what would the equivalent of the C code below be in Lua?
void InsertItem(node **head, node *newp){
    node **dp = head;
    while((*dp) && (*dp)->value > newp->value)
    {
        dp = &(*dp)->next;
    }
    newp->next = *dp;
    *dp = newp;
}
Yes, a double pointer may be translated to Lua as a nested table.
local function InsertItem(head, newitem)
    while head.next and head.next.value > newitem.value do
        head = head.next
    end
    newitem.next = head.next
    head.next = newitem
end
-- Typical Usage:
local head = {}
InsertItem(head, {value = 3.14})
InsertItem(head, {value = 42})
InsertItem(head, {value = 1})
-- Now the data is the following:
-- head = {next = elem1}
-- elem1 = {next = elem2, value = 42 }
-- elem2 = {next = elem3, value = 3.14}
-- elem3 = { value = 1 }
The big difference between C pointers and Lua tables is that in C, you can take the address of a variable and pass it to a function to modify it. You can't do that in Lua, but the function could always return the modified value.
Would they look something like dp = {p = {}}?
Yes, that's about as close as you can get to a pointer to a pointer in Lua.
If so, what would the equivalent of the C code below be in Lua?
Linked lists tend to work more smoothly with recursion:
local function InsertItem(head, newp)
    if not head or head.value <= newp.value then
        newp.next = head
        return newp
    end
    head.next = InsertItem(head.next, newp)
    return head
end
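For completeness, an untested usage sketch: since Lua cannot modify the caller's variable through an address, the recursive version returns the (possibly new) head and the caller reassigns it.
-- Usage sketch for the recursive version: reassign the returned head.
local head = nil
head = InsertItem(head, {value = 3.14})
head = InsertItem(head, {value = 42})
head = InsertItem(head, {value = 1})
-- head is now the 42 node: 42 -> 3.14 -> 1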
I'm wondering if anyone can help me figure out how to programmatically set the document number and date in the general journal trans, under the Invoice tab. I'm trying to post to the general journal in AX 2012 with X++. I currently have code that works, but there is no method under LedgerJournalTrans to set the document number or date. In fact, a lot of the setters are missing; there are only line num, account type, journal num, etc.
How can I set these fields? Below I have some code:
static void TestLedgerJournalImport(Args _args)
{
// Set these variables.
LedgerJournalNameId journalName = 'GenJrn';
SelectableDataArea company = '019';
TransDate transactionDate = 30\6\2012;
str line1MainAccount = '131310';
str line1FullAccount = '131310--';
str line2MainAccount = '131310';
str line2FullAccount = '131310-10-';
str line2Dimension1Name = 'Department';
str line2Dimension1Value = 'ACCT';
LedgerGeneralJournalService ledgerGeneralJournalService;
LedgerGeneralJournal ledgerGeneralJournal;
AfStronglyTypedDataContainerList journalHeaderCollection;
LedgerGeneralJournal_LedgerJournalTable journalHeader;
AifEntityKeyList journalHeaderCollectionKeyList;
RecId journalHeaderRecId;
AfStronglyTypedDataContainerList journalLineCollection;
LedgerGeneralJournal_LedgerJournalTrans journalLine1;
AifMultiTypeAccount journalLine1LedgerDimension;
LedgerGeneralJournal_LedgerJournalTrans journalLine2;
AifMultiTypeAccount journalLine2LedgerDimension;
AifDimensionAttributeValue journalLine2Dim1;
AfStronglyTypedDataContainerList journalLine2DimensionCollection;
;
ledgerGeneralJournalService = LedgerGeneralJournalService::construct();
ledgerGeneralJournal = new LedgerGeneralJournal();
// Create journal header.
journalHeaderCollection = ledgerGeneralJournal.createLedgerJournalTable();
journalHeader = journalHeaderCollection.insertNew(1);
journalHeader.parmJournalName(journalName);
// Create journal lines.
journalLineCollection = journalHeader.createLedgerJournalTrans();
// Line 1
journalLine1 = journalLineCollection.insertNew(1);
journalLine1.parmLineNum(1.00);
journalLine1.parmCompany(company);
journalLine1.parmTransDate(transactionDate);
journalLine1.parmAccountType(LedgerJournalACType::Ledger);
journalLine1.parmTxt('Test journal transaction');
journalLine1.parmAmountCurDebit(100.00);
journalLine1LedgerDimension = journalLine1.createLedgerDimension();
journalLine1LedgerDimension.parmAccount(line1MainAccount);
journalLine1LedgerDimension.parmDisplayValue(line1FullAccount);
journalLine1.parmLedgerDimension(journalLine1LedgerDimension);
// Line 2
journalLine2 = journalLineCollection.insertNew(2);
journalLine2.parmLineNum(2.00);
journalLine2.parmCompany(company);
journalLine2.parmTransDate(transactionDate);
journalLine2.parmAccountType(LedgerJournalACType::Ledger);
journalLine2.parmTxt('Test journal transaction');
journalLine2.parmAmountCurCredit(100.00);
journalLine2LedgerDimension = journalLine2.createLedgerDimension();
journalLine2DimensionCollection = journalLine2LedgerDimension.createValues();
journalLine2Dim1 = new AifDimensionAttributeValue();
journalLine2Dim1.parmName(line2Dimension1Name);
journalLine2Dim1.parmValue(line2Dimension1Value);
journalLine2DimensionCollection.add(journalLine2Dim1);
journalLine2LedgerDimension.parmAccount(line2MainAccount);
journalLine2LedgerDimension.parmDisplayValue(line2FullAccount);
journalLine2LedgerDimension.parmValues(journalLine2DimensionCollection);
journalLine2.parmLedgerDimension(journalLine2LedgerDimension);
// Insert records.
journalHeader.parmLedgerJournalTrans(journalLineCollection);
ledgerGeneralJournal.parmLedgerJournalTable(journalHeaderCollection);
journalHeaderCollectionKeyList =
LedgerGeneralJournalService.create(ledgerGeneralJournal);
journalHeaderRecId =
journalHeaderCollectionKeyList.getEntityKey(1).parmRecId();
info(strFmt("LedgerJournalTable.Recid = %1", int642str(journalHeaderRecId)));
}
Don't do it like that; you're making more work for yourself. I just wrote this example for you. I hacked it up from a more complex piece of code I wrote, so the offsetDefaultDimension was just left in as example code.
static void Job3(Args _args)
{
AxLedgerJournalTable journalTable = AxLedgerJournalTable::construct();
LedgerJournalTable ledgerJournalTable;
LedgerJournalName ledgerJournalName = LedgerJournalName::find('GenJrn');
AxLedgerJournalTrans journalTrans = AxLedgerJournalTrans::construct();
DimensionAttribute dimensionAttribute;
DimensionAttributeValue dimensionAttributeValue;
DimensionAttributeValueSetStorage dimStorage;
LedgerDimensionAccount ledgerDimension = DimensionDefaultingService::serviceCreateLedgerDimension(DimensionStorage::getDefaultAccountForMainAccountNum('131310'));
journalTable.parmJournalName(ledgerJournalName.JournalName);
journalTable.parmJournalType(ledgerJournalName.JournalType);
journalTable.save();
ttsBegin;
ledgerJournalTable = LedgerJournalTable::findByRecId(journalTable.ledgerJournalTable().RecId, true);
// The name gets reset if no journal number is provided, so we can just update it afterwards
ledgerJournalTable.Name = 'My Custom Journal Name/Description';
ledgerJournalTable.update();
ttsCommit;
journalTrans.parmJournalNum(journalTable.ledgerJournalTable().JournalNum);
journalTrans.parmTransDate(today());
journalTrans.parmCurrencyCode('USD');
journalTrans.parmTxt('AlexOnDAX.blogspot.com');
journalTrans.parmDocumentNum('MyDocNumber');
journalTrans.parmDocumentDate(today() - 1);
journalTrans.parmAccountType(LedgerJournalACType::Ledger);
journalTrans.parmLedgerDimension(DimensionAttributeValueCombination::find(ledgerDimension).RecId);
journalTrans.parmAmountCurDebit(100.00);
journalTrans.save();
info("Done");
}
I have been working on sorting an ArrayCollection, e.g. sorting a numeric list ascending or descending. The total length of my collection will go up to 100 items. Now I want to perform a sort on nested data like this:
Data Structure
Name : String
Categories : Array ["A","x or y or z","C"]
The Categories array will have a maximum of 3 items; of those three items, the second can have 3 different values, either x, y, or z. My result data looks like this:
{"Mike" , ["A","x","C"]}
{"Tim" , ["A","y","C"]}
{"Bob" , ["A","x","C"]}
{"Mark" , ["A","z","C"]}
{"Peter" , ["A","z","C"]}
{"Sam" , ["A","y","C"]}
Can anyone please explain how to sort this type of data so that all "x" items come first, "y" next, and "z" last, and vice versa? Any help is really appreciated. Thanks, Anandh.
You can specify a compare function in your SortField like this:
var sortfield:SortField = new SortField("Categories");
sortfield.compareFunction = myCompare;
var sort:Sort = new Sort();
sort.fields = [sortfield];
yourCollection.sort = sort;
and your compare function:
function myCompare(a:Object, b:Object):int {
/*
return -1, if a before b
return 1, if b before a
return 0, otherwise
*/
}
or something like that.. and it's untested code :)
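For illustration, one possible (untested) body for myCompare, assuming a and b are the full items and that the second entry of the Categories array holds the x/y/z value:
// Untested sketch: rank items by the second entry of their Categories array.
private static const CATEGORY_ORDER:Object = {x: 1, y: 2, z: 3};

private function myCompare(a:Object, b:Object):int {
    var rankA:int = CATEGORY_ORDER[a.Categories[1]];
    var rankB:int = CATEGORY_ORDER[b.Categories[1]];
    if (rankA < rankB) return -1;
    if (rankA > rankB) return 1;
    return 0;
}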
I have created a new property on the data structure called categoryOrder. In the setter I did the following, and I am using categoryOrder for the sorting (sortBy = categoryOrder). I understand a little hard-coding is needed, but I still believe this will reduce the number of comparisons compared with using a compareFunction. Can anyone please validate this idea? Thanks!
public function set categories(data:ArrayCollection):void
{
    if (data != null)
    {
        _categories = data;
        for each (var categorie:Object in data)
        {
            switch (categorie.categoryName)
            {
                case "x": { categoryOrder = 1; break; }
                case "y": { categoryOrder = 2; break; }
                case "z": { categoryOrder = 3; break; }
            }
        }
    }
}
Data Structure
Name : String
Categories : Array ["A","x or y or z","C"]
categoryOrder : Number
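If categoryOrder is kept up to date like this, a plain numeric SortField on that property should be enough and no compareFunction is needed; a small untested sketch (yourCollection is a placeholder for the ArrayCollection being sorted):
var sortField:SortField = new SortField("categoryOrder");
sortField.numeric = true;
// sortField.descending = true; // flip for the reverse order

var sort:Sort = new Sort();
sort.fields = [sortField];
yourCollection.sort = sort;
yourCollection.refresh();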