I created a model from the database table using Scaffolding in .NET Core 5.0:
Scaffold-DbContext "Server=XXX;Database=XXX;User ID=XXX;Password=XXX;" Microsoft.EntityFrameworkCore.SqlServer -Tables SomeTable
SomeTable.cs
public partial class SomeTable
{
public Guid Uid { get; set; }
public Guid? TypeUid { get; set; }
public string Sender { get; set; }
public DateTime? DateSent { get; set; }
public bool IsSend { get; set; }
public Guid? RefUid { get; set; }
}
some.proto
syntax = "proto3";
option csharp_namespace = "Test.Services";
import "google/protobuf/timestamp.proto";
import "google/protobuf/wrappers.proto";
package some;
service SomeService {
rpc GetSome (SomeRequest) returns (SomeReply);
}
message SomeRequest {
int32 Count = 1;
}
message SomeToSent {
string Uid = 1;
google.protobuf.StringValue TypeUid = 2;
google.protobuf.StringValue Sender = 3;
google.protobuf.Timestamp DateSent = 4;
bool IsSend = 5;
google.protobuf.StringValue RefUid = 6;
}
message SomeReply {
int32 Count = 1;
repeated SomeToSent Some = 2;
}
But I don't know how to cleanly send it via gRPC, because if I override rpc GetSome, then I need to send a repeated SomeToSent (and Count). But I don't want to map SomeTable to SomeToSent; I just want to send the list of SomeTable objects.
Concrete question definition
How do I properly send database objects via gRPC without mapping them to proto models?
P.S.
Yesterday I created a REST API which returns some of the SomeTable rows in JSON format. I also made a service which returns these rows via gRPC. The results are about the same if I compare them by speed, but I spent more time on manual object mapping. Every article I have read told me that I should try gRPC instead of REST for CRUD scenarios, but I really don't see the difference. I drew a very approximate diagram of elapsed time (10000 rows):
This diagram is based on debug info, and I still think that this mapping slows down the service response. Please correct me if I'm wrong.
IIUC you must map between protos and your generated database table classes:
gRPC can only transmit suitably-formatted objects over the network;
the database expects suitably-formatted objects to persist|retrieve.
A benefit of this abstraction is that, if you change the database table schema, you need only revise the map rather than necessarily change the proto message and vice versa.
Another benefit is that, if you add different sources and sinks to your solution, you can consider using e.g. the proto messages as a universal format. You will then only need to write n+m mappings instead of n*m.
You may wish to consider a message that includes repeated SomeToSent messages and|or defining an rpc that streams messages.
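If you do keep the proto messages, the mapping itself can be a simple LINQ projection inside the generated service base class. Below is a minimal sketch, assuming a scaffolded SomeDbContext and the generated SomeService.SomeServiceBase base class (the exact names depend on your scaffolding and proto output); for large result sets you could also declare the rpc as returning a stream of SomeToSent instead of one reply:
using System;
using System.Linq;
using System.Threading.Tasks;
using Google.Protobuf.WellKnownTypes;
using Grpc.Core;
using Microsoft.EntityFrameworkCore;
using Test.Services;

public class SomeGrpcService : SomeService.SomeServiceBase
{
    private readonly SomeDbContext _db;  // hypothetical scaffolded context name

    public SomeGrpcService(SomeDbContext db) => _db = db;

    public override async Task<SomeReply> GetSome(SomeRequest request, ServerCallContext context)
    {
        var rows = await _db.Set<SomeTable>()
            .AsNoTracking()
            .Take(request.Count)
            .ToListAsync();

        var reply = new SomeReply { Count = rows.Count };
        reply.Some.AddRange(rows.Select(r => new SomeToSent
        {
            Uid = r.Uid.ToString(),
            TypeUid = r.TypeUid?.ToString(),  // StringValue fields accept null
            Sender = r.Sender,
            DateSent = r.DateSent == null
                ? null
                : Timestamp.FromDateTime(DateTime.SpecifyKind(r.DateSent.Value, DateTimeKind.Utc)),
            IsSend = r.IsSend,
            RefUid = r.RefUid?.ToString()
        }));
        return reply;
    }
}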
And what if you persist only IMessages? That way the client does the deserializing and the server only sends the messages, for example a file like BoockMessage in the Protocol Buffers repository on GitHub.
Maybe I'm missing something really simple here, but I'm going to ask anyway...
I am using Xamarin forms (.NET Standard project), MVVMLight, Realm DB and ZXing Barcode Scanner.
I have a realmobject like so...
public class Participant : RealmObject
{
public string FirstName {get; set;}
public string LastName {get; set;}
public string Email {get; set;}
public string RegistrationCode {get; set;}
//More properties skipped out for brevity
}
I have the corresponding viewmodel as follows:
public class ParticipantViewModel
{
Realm RealmInstance;
public ParticipantViewModel()
{
RealmInstance = Realms.Realm.GetInstance();
RefreshParticipants();
}
private async Task RefreshParticipants()
{
//I have code here that GETS the list of Participants from an API and saves to the device.
//I am using the above-defined RealmInstance to save to IQueryable<Participant> Participants
}
}
All the above works fine and I have no issues with this. In the same viewmodel, I am also able to fire up the ZXing Scanner and scan a bar code representing a RegistrationCode.
This, in turn, populates the below property (also in the viewmodel) once scanned...
private ZXing.Result result;
public ZXing.Result Result
{
get { return result; }
set { Set(() => Result, ref result, value); }
}
and calls the below method (wired up via the ScanResultCommand) to fetch the participant bearing the scanned RegistrationCode.
private async Task ScanResults()
{
if (Result != null && !String.IsNullOrWhiteSpace(Result.Text))
{
string regCode = Result.Text;
await CloseScanner();
SelectedParticipant = Participants.FirstOrDefault(p => p.RegistrationCode.Equals(regCode, StringComparison.OrdinalIgnoreCase));
if (SelectedParticipant != null)
{
//Show details for the scanned Participant with regCode
}
else
{
//Display not found message
}
}
}
I keep getting the below error....
System.Exception: Realm accessed from incorrect thread.
generated by the line below....
SelectedParticipant = Participants.FirstOrDefault(p => p.RegistrationCode.Equals(regCode, StringComparison.OrdinalIgnoreCase));
I'm not sure how this is an incorrect thread but any ideas on how I can get around to fetching the scanned participant either from the already populated IQueryable or from the Realm representation directly would be greatly appreciated.
Thanks
Yes, you're getting a realm instance in the constructor, and then using it from an async task (or thread). You can only access a realm from the thread in which you obtained the reference. Since you're only using a default instance, you should be able to simply obtain a local reference within the function (or thread) where you use it. Try using
Realm LocalInstance = Realms.Realm.GetInstance();
at the top of the function and use that. You'll need to recreate the Participants query to use the same instance as its source too. This will be the case wherever you use async tasks (threads), so either change them all to get hold of the default instance on entry, or reduce the number of threads that access the realm.
Incidentally I'm surprised you don't get a similar access error from within
RefreshParticipants() - maybe you're not actually accessing data via RealmInstance from there.
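A minimal sketch of that suggestion applied to the ScanResults method from the question (the case-insensitive comparison is simplified to an exact match here):
private async Task ScanResults()
{
    if (Result != null && !String.IsNullOrWhiteSpace(Result.Text))
    {
        string regCode = Result.Text;
        await CloseScanner();

        // Open the realm on the thread this code actually runs on,
        // and query from that instance instead of the one from the constructor
        var localRealm = Realms.Realm.GetInstance();
        SelectedParticipant = localRealm.All<Participant>()
            .FirstOrDefault(p => p.RegistrationCode == regCode);

        if (SelectedParticipant != null)
        {
            // Show details for the scanned Participant with regCode
        }
        else
        {
            // Display not found message
        }
    }
}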
I'm using OrmLite with the SqlServer, Oracle and PostgreSQL dialects.
I want to use GUIDs as primary keys and have a simple object, using the AutoId attribute:
public class MyObject
{
[AutoId]
public Guid Id { get; set; }
[Required]
public string Name { get; set; }
...
All goes well with the SqlServer and PostgreSQL dialects, but with Oracle I get an initial GUID with all zeros in the db, and a subsequent INSERT violates the unique key constraint of my primary key. How can this be accomplished in a db-agnostic way so that it also works with Oracle?
Based on the source code I'm looking at, it doesn't appear to properly generate GUIDs for anything that's not SQL Server or PostgreSQL, regardless of what the documentation actually says on the README. Relevant code links below:
SQL Server
PostgreSQL
Base Dialect Provider
The best alternative I can provide here is to override the OracleOrmLiteDialectProvider. Specifically, I would override the GetAutoIdDefaultValue method to return "SYS_GUID()" if the field type is a GUID. Sample code below...
public class OracleNewGuidOrmLiteDialectProvider : OracleOrmLiteDialectProvider
{
public static OracleNewGuidOrmLiteDialectProvider Instance = new OracleNewGuidOrmLiteDialectProvider();
public string AutoIdGuidFunction { get; set; } = "SYS_GUID()";
public override string GetAutoIdDefaultValue(FieldDefinition fieldDef)
{
return fieldDef.FieldType == typeof(Guid)
? AutoIdGuidFunction
: null;
}
}
To match the rest of the provider implementations, I would recommend creating a OracleNewGuidDialect class, like below...
public static class OracleNewGuidDialect
{
public static IOrmLiteDialectProvider Provider => OracleNewGuidOrmLiteDialectProvider.Instance;
}
Then you would set the provider when you instantiate your OrmLiteConnectionFactory to OracleNewGuidOrmLiteDialectProvider.Instance, similar to below...
var dbFactory = new OrmLiteConnectionFactory(oracleConnectionString, OracleNewGuidDialect.Provider);
This isn't the best solution, but the pluggable nature of ServiceStack ORMLite allows you to control everything to the extent that you need. Hope this helps. Also, quick caveat--I didn't fully bake this solution, so you may need to tweak it, but based on the other implementations of the providers, it seems straightforward.
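For completeness, a rough usage sketch with the DTO from the question (untested, and assumes the custom provider above):
using (var db = dbFactory.OpenDbConnection())
{
    db.CreateTableIfNotExists<MyObject>();        // PK column default becomes SYS_GUID()
    db.Insert(new MyObject { Name = "first" });
    db.Insert(new MyObject { Name = "second" });  // should no longer collide on the primary key
}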
Does anyone have a working example of how to add audit models to an existing project, for Audit.NET?
It is a fantastic component to use, and up until now my team and I have gotten by with the standard JSON files. However, we'd like to migrate our current solution to our Xamarin application and store the auditing in the local SQLite database on the device.
However, the documentation for this project is somewhat lacking and there are no concise examples of how to get custom auditing working with Entity Framework.
We have worked through the MD files on the github repo, but we still cannot get auditing to work.
Another question similar to this has been asked HERE, but there is no definitive example of what the Audit_{entity} table should look like, what fields it MUST contain, and how to set up relationships for it.
We tried to reverse engineer the JSON files into a relational structure, but at the time of asking this question, we have not gotten any auditing to write to the SQLite database.
Sorry about the documentation not helping too much, hope I (or anybody) can provide better documentation in the future.
I am assuming you are using Entity Framework to map your entities to a SQLite database, and you want to use the EF data provider to store the audit events in the same database, in Audit_{entity} tables.
There is no constraint on the schema you want to use for your Audit_{entity} tables, as long as you have a one-to-one relation between your {entity} table and its Audit_{entity} table. The mapping can then be configured in several ways.
The recommendation for the Audit_{entity} tables is to have the same columns as the audited {entity} table, plus any additional common columns needed, like a User and a Date, defined on an interface.
So, if all your Audit_{entity} tables have the same columns/properties as their {entity}, and you add some common columns (defined on an interface), the configuration can be set like this:
public class User
{
public int Id { get; set; }
public string Name { get; set; }
}
public class Audit_User : IAudit
{
public int Id { get; set; }
public string Name { get; set; }
// IAudit members:
public string AuditUser { get; set; }
public DateTime AuditDate { get; set; }
public string AuditAction { get; set; } // "Insert", "Update" or "Delete"
}
Audit.Core.Configuration.Setup()
.UseEntityFramework(x => x
.AuditTypeNameMapper(typeName => "Audit_" + typeName)
.AuditEntityAction<IAudit>((ev, ent, auditEntity) =>
{
auditEntity.AuditDate = DateTime.UtcNow;
auditEntity.AuditUser = ev.Environment.UserName;
auditEntity.AuditAction = ent.Action;
}));
Note the interface is not mandatory, but using it makes the configuration cleaner. Also note you can make your Audit_{entity} inherit from your {entity} if you want to.
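For example, the Audit_User above could instead inherit from User so the audited columns don't have to be repeated:
public class Audit_User : User, IAudit
{
    // Id and Name are inherited from User
    public string AuditUser { get; set; }
    public DateTime AuditDate { get; set; }
    public string AuditAction { get; set; }
}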
Update
Maybe my assumption at the beginning is incorrect and you are not auditing EF entities, but any other type of audit. If that's the case, what you are looking for is a Data Provider that stores the audit events into your SQLite database.
At the time of writing, there is no built-in data provider that stores to SQLite, and if there were one, it would store just the JSON representation of the event in one column (like the SQL/MySQL providers). But it looks like you want a custom schema, so you will need to implement your own data provider.
Check the documentation here.
Here is a sample skeleton of a data provider:
public class SQLiteDataProvider : AuditDataProvider
{
public override object InsertEvent(AuditEvent auditEvent)
{
// Insert the event into SQLite and return its ID
}
public override void ReplaceEvent(object eventId, AuditEvent auditEvent)
{
// Replace the event given its ID (only used for CreationPolicies InsertOnStartReplaceOnEnd and Manual)
}
// async implementation:
public override async Task<object> InsertEventAsync(AuditEvent auditEvent)
{
// Asynchronously insert the event into SQLite and return its ID
}
public override async Task ReplaceEventAsync(object eventId, AuditEvent auditEvent)
{
// Asynchronously replace the event given its ID
}
}
Then you just set it up with:
Audit.Core.Configuration.Setup()
.UseCustomProvider(new SQLiteDataProvider());
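As a rough sketch, such a provider could store the JSON representation of each event using Microsoft.Data.Sqlite. The table and column names below (AuditEvent, Id, JsonData) are hypothetical, and the async overrides would follow the same pattern:
using System;
using Audit.Core;
using Microsoft.Data.Sqlite;

public class SQLiteDataProvider : AuditDataProvider
{
    private readonly string _connectionString;

    public SQLiteDataProvider(string connectionString) => _connectionString = connectionString;

    public override object InsertEvent(AuditEvent auditEvent)
    {
        var id = Guid.NewGuid().ToString();
        using (var cn = new SqliteConnection(_connectionString))
        {
            cn.Open();
            var cmd = cn.CreateCommand();
            cmd.CommandText = "INSERT INTO AuditEvent (Id, JsonData) VALUES ($id, $json)";
            cmd.Parameters.AddWithValue("$id", id);
            cmd.Parameters.AddWithValue("$json", auditEvent.ToJson());
            cmd.ExecuteNonQuery();
        }
        return id;
    }

    public override void ReplaceEvent(object eventId, AuditEvent auditEvent)
    {
        using (var cn = new SqliteConnection(_connectionString))
        {
            cn.Open();
            var cmd = cn.CreateCommand();
            cmd.CommandText = "UPDATE AuditEvent SET JsonData = $json WHERE Id = $id";
            cmd.Parameters.AddWithValue("$json", auditEvent.ToJson());
            cmd.Parameters.AddWithValue("$id", (string)eventId);
            cmd.ExecuteNonQuery();
        }
    }
}
With this shape you would pass the connection string in, i.e. .UseCustomProvider(new SQLiteDataProvider(connectionString)).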
I haven't found a solution for how to consume a different interface than the one that was published.
In the simple case, if I want to publish IMessage and consume IMessage, I have to share the assembly with the IMessage definition between the two applications.
But what if these two applications are developed by different companies?
In this case I have two options:
make an agreement about common interfaces, naming conventions etc. and share a common library
let both companies do their job as they are used to, and map the data types inside the service bus (or application server)
The second option is more appropriate for me, but I haven't found a solution.
For example, I might have an employee in one system as
public interface IEmployee
{
int ID { get; set; }
string FirstName { get; set; }
string LastName { get; set; }
}
And in the other system as
public interface ILightEmployee
{
int id { get; set; }
string full_name { get; set; }
}
I want to publish IEmployee and consume ILightEmployee.
During the serialization/deserialization phase in the service bus I want to
use some mapping of properties and achieve something like this (it is more like pseudo code):
public class ContractMapper
{
public LightEmployee Map(IEmployee employee)
{
return new LightEmployee()
{
id = employee.ID,
full_name = employee.LastName + " " + employee.FirstName
};
}
}
For example, MuleESB provides an editor for these transformations/mappings. LINK
That is an unnecessarily advanced solution for me, but I want to do the same thing, at least in code.
Is it possible using Rebus service bus?
As long as Rebus is able to properly deserialize the incoming JSON object into a concrete class, it will attempt to dispatch the message to all polymorphically compatible handlers.
With the default Newtonsoft JSON.NET and the Jil-based JSON serializer, the rbs2-content-type header will be set to application/json;charset=utf-8, and as long as an incoming message's header starts with application/json and has a compatible encoding, both serializers will try to deserialize to the type specified by the rbs2-msg-type header.
So, if you have a matching concrete class available in the app domain, you can have a handler that implements IHandleMessages<IEmployee> or IHandleMessages<ILightEmployee> - or IHandleMessages<object> for that matter, because that handler is polymorphically compatible with all incoming messages.
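For example, if the consuming side does have a compatible IEmployee contract available, the mapping to its own model can simply live inside the handler, roughly like this (reusing the ContractMapper and LightEmployee from the question):
using System.Threading.Tasks;
using Rebus.Handlers;

public class EmployeeHandler : IHandleMessages<IEmployee>
{
    readonly ContractMapper _mapper = new ContractMapper();

    public Task Handle(IEmployee message)
    {
        // Map the published contract to the local model and work with that
        LightEmployee local = _mapper.Map(message);
        // ... do something with 'local' ...
        return Task.CompletedTask;
    }
}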
The Jil serializer is special though, in that it will deserialize to a dynamic if it cannot find the .NET type that it was supposed to deserialize into.
This means that this handler:
public class HandlesEverything : IHandleMessages<object>
{
public async Task Handle(dynamic message)
{
// .... woohoo!
}
}
coupled with the Jil serializer will be able to handle all messages, dynamically, picking out whichever pieces it is interested in.
I hope the answer gives an impression of some of the possibilities with Rebus. Please tell me more if there's a scenario that you feel is not somehow covered well.
I have several variables that I need to send from page to page...
What is the best way to do this?
Just send them one by one:
string var1 = Session["var1"] == null ? "" : Session["var1"].ToString();
int var2 = Session["var2"] == null ? 0 : int.Parse(Session["var2"].ToString());
and so on...
Or put them all in some kind of container-object?
struct SessionData
{
public int Var1 { get; set; }
public string Var2 { get; set; }
public int Var3 { get; set; }
}
--
SessionData data = (SessionData)Session["data"];
What is the best solution? What do you use?
A hybrid of the two is the most maintainable approach. The Session offers a low-impedance, flexible key-value pair store so it would be wasteful not to take advantage of that. However, for complex pieces of data that are always related to each other - for example, a UserProfile - it makes sense to have a deeply nested object.
If all the data that you're storing in the Session is related, then I would suggest consolidating it into a single object like your second example:
public class UserData
{
public string UserName { get; set; }
public string LastPageViewed { get; set; }
public int ParentGroupId { get; set; }
}
And then load everything once and store it for the Session.
However, I would not suggest bundling unrelated Session data into a single object. I would break each separate group of related items into its own object. The result would be something of a middle ground between the two hardline approaches you provided.
I use a SessionHandler, which is a custom rolled class that looks like this
public static class SessionHandler
{
    // Uses HttpContext.Current so the static class can reach the current session
    public static string UserId
    {
        get
        {
            return HttpContext.Current.Session["UserId"] as string;
        }
        set
        {
            HttpContext.Current.Session["UserId"] = value;
        }
    }
}
And then in code I do
var user = myDataContext.Users.Where(u => u.UserId == SessionHandler.UserId).FirstOrDefault();
I don't think I've ever created an object just to bundle other objects for storage in a session, so I'd probably go with the first option. That said, if you have such a large number of objects that you need to bundle them up to make them easier to work with, you might want to re-examine your architecture.
I've used both. In general, many session variable names lead to a possibility of collisions, which makes collections a little more reliable. Make sure the collection content relates to a single responsibility, just as you would for any object. (In fact, business objects make excellent candidates for session objects.)
Two tips:
Define all session names as public static readonly variables, and make it a coding standard to use only these static variables when naming session data.
Second, make sure that every object is marked with the [Serializable] attribute. If you ever need to save session state out-of-process, this is essential.
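A small sketch that combines both tips with the UserData example from earlier (the names are illustrative):
public static class SessionKeys
{
    // The single place where session key names are defined
    public static readonly string UserData = "UserData";
}

[Serializable]
public class UserData
{
    public string UserName { get; set; }
    public string LastPageViewed { get; set; }
    public int ParentGroupId { get; set; }
}

// Usage in a page:
Session[SessionKeys.UserData] = new UserData { UserName = "jdoe" };
var data = (UserData)Session[SessionKeys.UserData];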
The big plus of an object: properties are strongly-typed.