GraphDb side
Vertex:User
Edge:Has
Vertex:Car
Object side
public class User {
public string Name { get; set; }
[GraphEdge("HAS_CAR")]
public ICollection<Car> Cars { get; set; }
}
Problem
I want to get User X, with its Cars property populated, from Neo4j via Gremlin. (I'm using Neo4jClient.)
It's similar to the Include method in LINQ to Entities.
Best regards
Assuming a graph with a root node connected to users via HAS_USER edges, you would use a Gremlin query like this to retrieve all of the cars, for all users:
g.v(0).out('HAS_USER').out('HAS_CAR')
Now, let's filter it down to just the red cars:
g.v(0).out('HAS_USER').out('HAS_CAR').filter { it.Color == "Red" }
Finally, you want the users instead of the cars. It's easiest to think of Gremlin working like an actual gremlin (little creature). You've told him to run to the users, then down to each of the cars, then to check the color of each car. Now you need him to go back to the users that he came from. To do this, we put a mark in the query like so:
g.v(0).out('HAS_USER').as('user').out('HAS_CAR').filter { it.Color == "Red" }.back('user')
To write this in C# with Neo4jClient is then very similar:
graphClient
.RootNode
.Out<User>(HasUser.TypeKey)
.As("user")
.Out<Car>(HasCar.TypeKey, c => c.Color == "Red")
.BackV<User>("user")
The only difference here is that you need to use BackE or BackV for edges and vertices respectively, instead of just Back. This is because, in the statically typed world of C#, we need different method names to be able to return different enumerator types.
I hope that helps! :)
--
Tatham
Oğuz,
Now that you have updated the question, I understand it better.
GraphEdgeAttribute is not part of Neo4jClient, so I'm not sure where it has come from.
In Neo4jClient, we do not load deep objects. That is, we will not follow properties and load further collections. We do this because a) it would require us to do lots of roundtrips to the server and b) we think you should be explicit about what data you actually want to load. We do not want to be an equivalent to the Spring Data for Neo4j project because I do not believe it is a good approach.
It sounds like you might want to look at Cypher instead of Gremlin. That will let you load data as tables, including projections from multiple nodes.
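For example, with Neo4jClient's Cypher support you could load a user together with all of its cars in a single round trip. This is only a sketch: the User/Car labels, the HAS_CAR relationship type, and the already-connected graphClient are all assumptions about your setup.

```csharp
// Sketch: one Cypher query that returns the user plus a collected
// list of its cars. Label and relationship names are assumptions.
var results = graphClient.Cypher
    .Match("(user:User)-[:HAS_CAR]->(car:Car)")
    .Where((User user) => user.Name == "X")
    .Return((user, car) => new
    {
        User = user.As<User>(),
        Cars = car.CollectAs<Car>()
    })
    .Results;
```

This keeps the data loading explicit: you state exactly which related nodes you want, rather than relying on deep-object loading.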
-- Tatham
Related
I would like to be able to provide a list of all the properties across all documents in a collection.
The best way I can come up with is to query for all documents and then build the list in the client, but this feels wrong.
The only way to do what you want is to read all of the documents. However, if you are worried about bandwidth, then you can do it in a stored procedure that only returns the list of properties.
If you take that route, I recommend that you start with the countDocuments sproc here, and be prepared to call it as many times as necessary, until the continuation comes back empty and there are no 429 errors... or use documentdb-utils, which takes care of that for you.
Alternatively, I could give you a full on example here. Just let me know.
Another approach would be to maintain a list of properties as documents are being written. This would be preferred if you need this list often.
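The "maintain the list on write" idea can be sketched in plain C# like this. The in-memory HashSet stands in for whatever persistent store you would actually use (for example, a dedicated metadata document updated in the same operation).

```csharp
using System;
using System.Collections.Generic;

// Sketch: every document write also updates a running set of known
// property names, so listing them later needs no collection scan.
public static class PropertyTracker
{
    static readonly HashSet<string> KnownProperties = new HashSet<string>();

    // Call this from the same code path that writes a document.
    public static void OnDocumentWrite(IDictionary<string, object> document)
    {
        foreach (var key in document.Keys)
            KnownProperties.Add(key);
    }

    public static IEnumerable<string> Properties
    {
        get { return KnownProperties; }
    }

    public static void Main()
    {
        OnDocumentWrite(new Dictionary<string, object> { { "id", "1" }, { "Name", "Ann" } });
        OnDocumentWrite(new Dictionary<string, object> { { "id", "2" }, { "Age", 30 } });
        Console.WriteLine(string.Join(", ", PropertyTracker.Properties));
    }
}
```

The trade-off is extra work on every write in exchange for a cheap read, which is why it only pays off if you need the list often.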
You can store documents with any kind of structure in a collection; they can all be different.
A collection does not restrict you to storing objects that share the same "schema".
So getting all the properties available in a collection is not really something supported by the DocumentDB API or SDK: you either read the whole collection, or rely on some sort of convention that you establish when you create objects.
You can use Slazure for this. Example follows which lists all property names for a given set of documents:
using SysSurge.Slazure.AzureDocumentDB.Linq;
using SysSurge.Slazure.Core;
using SysSurge.Slazure.Core.Linq.QueryParser;
public void ShowPropertyNames()
{
    // Get a reference to the TestCustomers collection
    dynamic storage = new QueryableStorage<DynDocument>("URL=https://contoso.documents.azure.com:443/;DBID=DDBExample;TOKEN=VZ+qKPAkl9TtX==");
    QueryableCollection<DynDocument> collection = storage.TestCustomers;

    // Build collection query
    var queryResult = collection.Where("SignedUpForNewsletter = true and Age < 22");

    foreach (DynDocument document in queryResult)
    {
        foreach (KeyValuePair<string, IDynProperty> keyValuePair in document)
        {
            Console.WriteLine(keyValuePair.Key);
        }
    }
}
I am having a hard time understanding how to return data in Gremlin syntax when you have vertices that combine to create a complex object.
My syntax below is Gremlin, used with Gremlin.Net to access Cosmos DB - so the GraphSON is returned through Cosmos, and my POCO objects are in C# syntax.
Say I had an example that was more like a tree, where everything was related, but I didn't want repeated data. Take a property - like an apartment building. You have the Property vertex, Room vertices, and Person vertices. If I was writing standard C# POCOs, it might look like this:
public class Property
{
    public List<Room> Rooms { get; set; }
    public List<Person> Managers { get; set; }
    // additional general properties of the property - name, address, etc.
}

public class Room
{
    public List<Person> Tenants { get; set; }
    // other room properties - number, size, etc.
}

public class Person
{
    // general properties - name, age, gender, etc.
}
So I have been trying to somewhat replicate that structure in GraphSON, but once I get down one level it doesn't really seem like that's what is done - at least I haven't found any examples. I was expecting to be able to get the GraphSON to look more like this when returned:
"property": {
    "basic": {
        // the property vertex
    },
    "managers": [ // array of managers
        {
            "manager": {
                // person vertex
            }
        }
    ],
    "rooms": [ // array of rooms
        {
            "room": {
                // a room vertex
            },
            "tenants": [
                {
                    "tenant": {
                        // person vertex
                    }
                }
            ]
        }
    ]
}
The one other caveat is that generally I may want only certain properties returned, or only parts and not the entire vertex - so most likely valueMap or something like that.
I've tried sideEffects, flatMap, maps, local in a variety of ways to see if I could get it, but it always seems to fall apart fairly quickly.
If I do a call like this:
g.V('my-key').as('property').flatMap(out('a-room')).as('r').select('property','r')
I'll get something more like this for return:
[
    {
        "property": {}, // property vertex
        "r": {}         // a room vertex
    },
    {
        "property": {}, // property vertex
        "r": {}         // a room vertex
    }
    // repeat until you have all rooms
]
That causes a lot of extra data to be returned, because I only need the property info once.
g.V('my-key').as('p').local(out('a-room').fold()).as('r').unfold().local(out('a-tenant').fold()).as('t').select('p','r','t')
That causes duplicate data again, and keeps everything one level down rather than in sub-levels.
So my question is:
Is the GraphSON format I proposed possible?
Am I thinking about this the wrong way when trying to pull back data?
Is what I'm doing uncommon with graph DBs? I've had a hard time finding examples of this kind of single-to-multiple, multi-level relationship used to create a complex object.
When asking questions about Gremlin it's always best to include a brief Gremlin script that can create some sample data as it makes it incredibly easy for those who answer to give you an exact/tested traversal that can solve your problem (example).
As to your question, you can definitely return data in whatever form you need. It might help to read this recipe in the TinkerPop documentation on Collections. In your case, I think you just need a nested project() type of traversal:
g.V("my-key").
  project('name', 'address', 'rooms').
    by('name').
    by('address').
    by(out('a-room').
         project('number', 'size', 'tenantCount').
           by('number').
           by('size').
           by(out('a-tenant').count()).
         fold())
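To address the valueMap caveat you mentioned, the same nested project() shape works when you want selected properties rather than counts. A sketch, with the property keys and edge labels assumed to match your graph:

```
g.V("my-key").
  project('basic', 'rooms').
    by(valueMap('name', 'address')).
    by(out('a-room').
         project('room', 'tenants').
           by(valueMap('number', 'size')).
           by(out('a-tenant').valueMap('name', 'age').fold()).
         fold())
```

Each by() modulator decides how that key's value is produced, so you can mix scalar properties, valueMap projections, and folded sub-lists freely.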
Current project:
ASP.NET 4.5.2
MVC 5
EF 6
In all honesty, I have never made use of a mapper before, and while the ExpressMapper tutorial bounces across the high-altitude highlights, it makes several assumptions about knowledge that I don’t have.
So in no general order:
The mapping registration is supposed to be centralized in one spot. Where is this spot? Where do I put it? The examples start out with:
public void MappingRegistration()
{
    Mapper.Register<Product,ProductViewModel>();
}
But I don’t know where to put this. Does it go into its own file, or in another file, such as within App_Start?
If it goes elsewhere in the project, do I create it under its own namespace?
If I have a viewModel that is filled in a different way than its dataModel, how do I handle each direction separately? That is, the data pulled out of the DB fills the viewModel under completely different conditional rules than those used when data is pulled from the viewModel and inserted into, or updated back to, the database.
How do I bring in external conditionals that affect how, and which, data is inserted into the DB, such as the role of the user, their UserId and UserName, and various project settings? Depending on these conditionals, some entries may end up with a null value instead of an actual value. And how can I do business-logic validation using these conditionals (for example, verifying that the user is actually updating his own record, by comparing the session UserId with the UserId stored in the DB)?
Right now I am doing a lot of manual mapping in the Models, but this is problematic, especially since the method I am using (to cut down on code in the controller) means that during an update I cannot examine an entry in the DB prior to updating it.
You can stick it anywhere you want - the only thing necessary is that it gets called in code, before you call Mapper.Map<Product,ProductViewModel>.
E.g.
public static void Main()
{
    Mapper.Register<Product,ProductViewModel>();
}

is functionally the same as

public static void Main()
{
    RegisterMapping();
}

public static void RegisterMapping()
{
    Mapper.Register<Product,ProductViewModel>();
}
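In an ASP.NET MVC project, the conventional spot is an App_Start class invoked from Application_Start in Global.asax. A sketch - the MappingConfig class and file names are just a convention, not anything ExpressMapper requires:

```csharp
// App_Start/MappingConfig.cs (hypothetical file/class name)
public static class MappingConfig
{
    public static void RegisterMappings()
    {
        Mapper.Register<Product, ProductViewModel>();
        Mapper.Compile(); // pre-compile the registered mappings up front
    }
}

// Global.asax.cs
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    MappingConfig.RegisterMappings(); // runs once, before any Mapper.Map call
}
```

This guarantees every registration has happened before the first controller action ever calls Mapper.Map.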
If you want to map one class member to another class member with a different name, you can specify it with Member mapping.
Mapper.Register<Product, ProductViewModel>()
.Member(dest => dest.efgh, src => src.abcd);
If you want to apply special conversion rules, you can specify that with a Function mapping - e.g. you want the price in the ProductViewModel to be 2x the price of the product :
Mapper.Register<Product, ProductViewModel>()
.Function(dest => dest.Price, src => src.Price*2);
Any customisation that you make to the mapping should be done at the time you register the mapping, and has to be done on a Member-by-Member basis AFAIK.
If there's anything else specific that you need help with, leave a comment.
I want to do the following using Asp.net Web API 2 and RavenDB.
Send a string to RavenDB.
Look up a document containing a field called UniqueString that contains the string I passed to RavenDB.
Return either the document that matched, or a "YES/NO, a document with that string exists" message.
I am completely new to NoSQL and RavenDB, so this has proven to be quite difficult :-) I hope someone can assist me, and I assume it is actually quite easy to do, though I haven't been able to find any guides showing it.
This has nothing to do with WebAPI 2, but you can do what you ask for using RavenDb combined with WebAPI 2.
First you need an index (or let RavenDB auto-create one for you) on the document and the property/properties you want indexed. This index can be created from code like this:
public class MyDocument_ByUniqueString : AbstractIndexCreationTask<MyDocument>
{
    public override string IndexName
    {
        get { return "MyDocumentIndex/ByUniqueString"; }
    }

    public MyDocument_ByUniqueString()
    {
        Map = documents => from doc in documents
                           select new
                           {
                               doc.UniqueString
                           };
    }
}
or created in the RavenDb Studio:
from doc in docs.MyDocuments
select new {
doc.UniqueString
}
After that you can do an "advanced document query" (from a WebAPI 2 controller or similar in your case) on that index and pass in a Lucene wildcard:
using (var session = documentStore.OpenSession())
{
    var result = session.Advanced
        .DocumentQuery<MyDocument>("MyDocumentIndex/ByUniqueString")
        .Where("UniqueString: *uniq*")
        .ToList();
}
This query will return all documents that have a property "UniqueString" containing the term "uniq". The document in my case looked like this:
{
"UniqueString": "This is my unique string"
}
Please note, however, that these kinds of wildcards in Lucene might not be very performant, as they may need to scan large amounts of text. The RavenDB documentation even includes a warning against this:
Warning
RavenDB allows to search by using such queries but you have to be aware that leading wildcards drastically slow down searches. Consider if you really need to find substrings, most cases looking for words is enough. There are also other alternatives for searching without expensive wildcard matches, e.g. indexing a reversed version of text field or creating a custom analyzer.
http://ravendb.net/docs/article-page/2.0/csharp/client-api/querying/static-indexes/searching
Hope this helps!
Get the WebApi endpoint working to collect your input. This is independent of RavenDB.
Using the RavenDB client, query the database using Linq or one of the other methods.
After the document is retrieved you may need to write some logic to return the expected result.
I skipped the step where the database gets populated with the data to query. I would leverage the RavenDB client tools as much as possible in your app vs trying to use the HTTP api.
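A sketch of the query step using the LINQ API, assuming a MyDocument class with a UniqueString property and an already-initialized documentStore:

```csharp
// Sketch: exact-match lookup; returns the matching document or null.
using (var session = documentStore.OpenSession())
{
    MyDocument match = session.Query<MyDocument>()
        .FirstOrDefault(d => d.UniqueString == input);

    bool exists = match != null; // the "YES/NO" part of the response
}
```

Your WebAPI controller can then return either the document itself or just the boolean, depending on which response shape the caller wants.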
I'm trying to write some code using pure SQL in ASP.NET MVC.
I assume I should be building a model and sticking to the MVC pattern.
Any suggestions for good practice would be highly appreciated, and examples would be very useful too. For example, I'm not sure if I should be splitting this code off from my main repositories, and if I should, where should I put it?
Also I will be attempting to return data from 2 tables in this query.
The kind of query I would like to use is like this.
See top answer from this page
How to implement high performance tree view in SQL Server 2005
Also
string sqlGetQuestionAnswers = "SELECT TOP (10) * FROM tblquestion ORDER BY NEWID()";

using (SqlDataAdapter dapQuestions = new SqlDataAdapter(sqlGetQuestionAnswers, ConfigurationManager.ConnectionStrings["SiteConnectionString"].ToString()))
{
    DataSet dsQuestions = new DataSet();
    dapQuestions.Fill(dsQuestions);

    if (dsQuestions.Tables[0].Rows.Count > 0)
    {
        // work with the data
    }
    else
    {
        // handle the error / empty result
    }
}
Given you want a SQL-to-model approach, this might work for you.
I'm using a LINQ to SQL data context here.
I have a table of Articles that contains, let's say, 10 fields, but all I want is the title, so I create a class:
public class Art
{
    public string title { get; set; }
}
Then I have my data context object:

static ArticlesDataContext dc = new ArticlesDataContext(
    System.Configuration.ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString);
Then I can fill my, albeit simple, model:

var arts = dc.ExecuteQuery<Art>(@"Select * from articles");
Does this help or am I off base?
Leave it in your repository. The purpose of a repository is to abstract away your domain operations - if every function uses a different datasource and different methods of accessing the data (sql, file IO, http), so be it - the repository's clients won't know the difference.
Obviously the more cohesive you make the repository though, the easier it will be to maintain. However, this code definitely belongs there.
This seems like the "bloody knuckle" approach - you're really not using any of the 3.5 features that solve problems like this on your behalf.
That said, I would suggest that you build business objects in your model folder, and let your business objects handle their persistence using your SQL. Don't put the SQL in your controller, and definitely not in your view. Maintain a clear separation between these layers, and your life will be much easier.
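For instance, a business object in the Models folder might own its persistence like this. This is a sketch only: the table and column names and the connection-string key are assumptions carried over from the question, and it uses a SqlDataReader instead of a DataSet to map rows straight into objects.

```csharp
using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlClient;

// Models/Question.cs - hypothetical business object that owns its own SQL.
public class Question
{
    public int Id { get; set; }
    public string Text { get; set; }

    public static List<Question> GetRandomTen()
    {
        var questions = new List<Question>();
        string connectionString = ConfigurationManager
            .ConnectionStrings["SiteConnectionString"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT TOP (10) QuestionId, QuestionText FROM tblquestion ORDER BY NEWID()",
            connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    questions.Add(new Question
                    {
                        Id = reader.GetInt32(0),   // column names/types assumed
                        Text = reader.GetString(1)
                    });
                }
            }
        }
        return questions;
    }
}
```

The controller then just calls Question.GetRandomTen() and hands the list to the view, keeping SQL out of both the controller and the view.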