I have the following implementation that uses a plain DateTime.UtcNow and then converts back for the UI.
The purpose in this scenario is very simple.
When I insert a new client into the database, I store UtcNow so I have a timestamp for the record.
When I show it back to the user, I convert it to local time (e.g. "You are a member since dd/MM/yyyy").
Here is my implementation. I would like to know how to get this done with NodaTime. Also, if anyone has a better approach, I'd be glad to hear it!
public interface IDateTime
{
    DateTime ToUtcNow { get; }
    DateTime FromUtc(DateTime dateTimeToConvert);
}

public class DateTimeProvider : IDateTime
{
    public DateTime ToUtcNow
    {
        get { return DateTime.UtcNow; }
    }

    public DateTime FromUtc(DateTime dateTimeToConvert)
    {
        return dateTimeToConvert.ToLocalTime();
    }
}
Also, how should I save it to the database, in this case SQL Server? What data type should I use: datetime or datetimeoffset? And what about in code: OffsetDateTime or ZonedDateTime? I am very confused about that. I have read the user guide, but it covers more complex scenarios, and I can't work out how to apply it to such a simple one.
Thanks in advance!
It sounds like you're trying to store points in time, regardless of calendar system or time zone. Those are represented by Instant values in Noda Time. So within most of your code, that's probably what you should use to represent the timestamp. (That's also what IClock.Now will give you.)
However, your database is unlikely to know about Noda Time... I would suggest that you should probably store a DateTimeOffset, which you can obtain via Instant.ToDateTimeOffset (and there's Instant.FromDateTimeOffset too). That will always have an offset of 0, but at least then it's not ambiguous about which point in time it represents.
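To make that concrete, here is a minimal sketch of how the round trip could look (assuming NodaTime 1.x, where IClock.Now returns an Instant; in 2.x you would call GetCurrentInstant() instead, and the class and method names here are just placeholders):

using System;
using NodaTime;

public class MemberService
{
    private readonly IClock _clock;

    // Inject SystemClock.Instance in production and a fake clock in tests.
    public MemberService(IClock clock)
    {
        _clock = clock;
    }

    // Store the result in SQL Server as a datetimeoffset column; it is always at offset 0.
    public DateTimeOffset GetSignUpTimestamp()
    {
        Instant now = _clock.Now; // a point on the global timeline
        return now.ToDateTimeOffset();
    }

    // Convert to the user's zone only at the UI boundary.
    public string FormatMemberSince(DateTimeOffset stored)
    {
        Instant instant = Instant.FromDateTimeOffset(stored);
        DateTimeZone zone = DateTimeZoneProviders.Tzdb.GetSystemDefault();
        ZonedDateTime local = instant.InZone(zone);
        return local.ToDateTimeOffset().ToString("dd/MM/yyyy");
    }
}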
When writing a C# console App, and using System.Data.SQLite, I am able to perform SQL commands such as:
string cosfun = "UPDATE test SET cosColumn = column1 * cos(20)";
However, when I try using a similar command in Xamarin.Forms, using the sqlite-net-pcl package, I get the following error: SQLite.SQLiteException: 'no such function: cos'
I have found a similar question on SO (Custom SQLite functions in Xamarin.iOS). However, I didn't fully understand the response. I now have the following questions:
1) Can I make custom SQL functions using sqlite-net-pcl in Xamarin.Forms? If so, could someone please share a simple (but complete) example of how to do this?
2) Is there any way for me to access the same math functions (pow, cos, sin, etc.) that I can access when writing console apps in C#?
3) Is there another way to do this? For example, can I read columns from the database into a List, then perform the required math functions, and feed that back into the database? Would this be a terrible idea with a large database?
Thanks in advance,
Dustin
First: yes.
The SQLite-net PCL by Frank Krueger is the one that Xamarin University uses in their XAM160 - Working with SQLite and Mobile Data class: https://university.xamarin.com/classes/track/cross-platform-design
Second: yes.
You can find some documentation on how to get started on the Xamarin developer site: http://developer.xamarin.com/recipes/android/data/databases/sqlite/
Third: that approach works too, with the caveats discussed in the other answer below.
More info:
You can refer to the official documentation here; another similar discussion that may be helpful is this one.
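For the first question, here is a rough sketch of registering a scalar cos function through the lower-level SQLitePCLRaw API that sqlite-net-pcl sits on. I'm assuming your sqlite-net-pcl version exposes the raw handle via SQLiteConnection.Handle, so treat this as a starting point rather than tested code:

using System;
using SQLite;
using SQLitePCL;

var conn = new SQLiteConnection("mydata.db3");

// Register a one-argument cos(x) function on this connection only.
raw.sqlite3_create_function(conn.Handle, "cos", 1, null,
    (ctx, userData, args) =>
    {
        double x = raw.sqlite3_value_double(args[0]);
        raw.sqlite3_result_double(ctx, Math.Cos(x));
    });

// The UPDATE from the question should now work on this connection:
conn.Execute("UPDATE test SET cosColumn = column1 * cos(20)");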
Correct me if I'm wrong, but what you're trying to do is essentially have two columns, where one contains a set of data and the other contains the result of a simple mathematical operation on the first. That gives you two columns where one is dependent on the other, which means you are occupying double the necessary storage space. For 100 entries, that's alright. For 1,000,000? Less so.
I personally think you are better off not having cosColumn at all and calculating the cosine when you read the data. For example:
// In your C# code...
public class MyData
{
    public double Column1 { get; set; } = 0.0;
    public double Cosine => Math.Cos(Column1);
}
In the above, the cosine value is never stored in either C# or SQLite; it is computed only when needed. This makes the SQLite table much more memory-friendly and gives it a cleaner structure.
In the code above, the line:
public double Cosine => Math.Cos(Column1);
is exactly equivalent to:
public double Cosine
{
    get
    {
        return Math.Cos(Column1);
    }
}
There's no real difference between the two, and you save a lot of line-space. You can find more information on the => notation from this StackOverflow answer by Alex Booker.
Let's go through an example of implementing this structure. Suppose you have a database table with one column named Column1, and you want to apply a cosine function to its value and display it. Your code might look like:
// Read from database object of type MyData
MyData data = ReadOneValueFromDatabase<MyData>();
// Display values in a label
MyValueLabel.Text = "Database value: " + data.Column1.ToString();
MyCosineLabel.Text = "Cosine value: " + data.Cosine.ToString();
The object data will store the value of Column1 from the database in Column1, but not Cosine. The value of Cosine is only obtained when you call data.Cosine.
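If it helps, here is roughly what reading those rows looks like with sqlite-net-pcl itself (the table and file names are placeholders):

using System;
using SQLite;

[Table("test")]
public class MyData
{
    public double Column1 { get; set; }

    // Get-only, so sqlite-net never tries to persist it.
    public double Cosine => Math.Cos(Column1);
}

// Usage:
var conn = new SQLiteConnection("mydata.db3");
foreach (MyData row in conn.Table<MyData>())
{
    Console.WriteLine($"{row.Column1} -> {row.Cosine}");
}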
I am having an issue with the firebase.firestore.Timestamp class.
I am working on an Angular 6 app with firestore. I have been using Timestamps with no issues for a while, but I am in the process of moving some of my client-side services to cloud functions.
The issue is this:
When I perform a write operation directly from the client side, such as this:
const doc = { startTimestamp: firebase.firestore.Timestamp.fromDate(new Date()) };
firebase.firestore().doc('some_collection/uid').set(doc);
The document gets written correctly to firestore as a Timestamp.
However, when I send doc to a cloud function, then perform the write from the function, it gets written as a map, not a timestamp. Similarly, if I use a JS Date() object instead of a firestore.Timestamp, it gets written correctly from the client side but written as a string from the cloud function.
This makes sense given that the document is just JSON in the request.body; I guess I was just hoping that Firestore would be smart enough to handle the conversion implicitly.
For now, I have a workaround that just manually converts the objects into firestore.Timestamps again in the cloud function, but I am hoping that there is a more effective solution, possibly something buried in the SDK that I have not been able to find.
Has anyone else come across and, ideally, found a solution for this?
I can provide more code samples if needed.
The behavior you're observing is expected. The Firestore client libraries have a special interpretation of Timestamp type objects, and they get converted to a Timestamp type field in the database when written. However, if you try to serialize a Timestamp object as JSON, you will just get an object with the timestamp's seconds and nanoseconds components. If you want to send these timestamp components to Cloud Functions or some other piece of software, that's fine, but that other piece of software is going to have to reconstitute a real Timestamp object from those parts before writing to Firestore with the Admin SDK or whatever SDK you're using to deal with Firestore.
In your model class, use
@ServerTimestamp var timestamp: Date? = null
or
@ServerTimestamp Date timestamp = null;
Leave timestamp uninitialized in your code (rather than setting it to, say, new Date()); the server will populate it on write.
Example:
@IgnoreExtraProperties
data class ProductItem(
    var userId: String? = "",
    var avgRating: Double = 0.toDouble(),
    @ServerTimestamp var timestamp: Date? = null
)
or
public class ProductItem {
    String userId;
    Double avgRating;
    @ServerTimestamp Date timestamp;
}
I'm trying to implement simple DDD/CQRS architecture without event-sourcing for now.
Currently I need to write some code for adding a notification to a document entity (document can have multiple notifications).
I've already created a command NotificationAddCommand, ICommandService and IRepository.
Before inserting a new notification through IRepository, I have to query the current user_id from the DB using the NotificationAddCommand.User_name property.
I'm not sure how to do it right, because I could either:
Use IQuery from the read flow.
Pass user_name to the domain entity and resolve user_id in the repository.
Code:
public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo)
    {
        _notificationsRepository = notifsRepo;
    }

    public void Handle(NotificationAddCommand command)
    {
        // command.user_id = Resolve(command.user_name) ??
        // command.source_secret_id = Resolve(command.source_id, command.source_type) ??

        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;

            _notificationsRepository.Add(notificationEntity);
        }
    }
}
What if I need more complex logic before inserting? Is it ok to use IQuery or should I create additional services?
The idea of reusing your IQuery somewhat defeats the purpose of CQRS in the sense that your read-side is supposed to be optimized for pulling data for display/query purposes - meaning that it can be denormalized, distributed etc. in any way you deem necessary without being restricted by - or having implications for - the command side (a key example being that it might not be immediately consistent, while your command side obviously needs to be for integrity/validity purposes).
With that in mind, you should look to implement a contract for your write side that will resolve the necessary information for you. Driving from the consumer, that might look like this:
public DocumentsCommandService(
    IRepository<Notification, long> notifsRepo,
    IUserIdResolver userIdResolver)

public interface IUserIdResolver
{
    string ByName(string username);
}
With IUserIdResolver implemented as appropriate.
Of course, if both this and the query-side use the same low-level data access implementation (e.g. an immediately-consistent repository) that's fine - what's important is that your architecture is such that if you need to swap out where your read side gets its data for the purposes of, e.g. facilitating a slow offline process, your read and write sides are sufficiently separated that you can swap out where you're reading from without having to untangle reads from the writes.
Ultimately the most important thing is to know why you are making the architectural decisions you're making in your scenario - then you will find it much easier to make these sorts of decisions one way or another.
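Putting that together, the handler from the question might end up looking something like this sketch (IUserIdResolver as above; SenderId is just a hypothetical stand-in for wherever the resolved user_id lives on your entity):

public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;
    private readonly IUserIdResolver _userIdResolver;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo,
        IUserIdResolver userIdResolver)
    {
        _notificationsRepository = notifsRepo;
        _userIdResolver = userIdResolver;
    }

    public void Handle(NotificationAddCommand command)
    {
        // Resolved through the write side's own contract, not the read-side IQuery.
        string userId = _userIdResolver.ByName(command.User_name);

        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.SenderId = userId; // hypothetical property
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;

            _notificationsRepository.Add(notificationEntity);
        }
    }
}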
In a project I'm working on, I have similar issues. I see three options to solve this problem:
1) What I did was make a UserCommandRepository that has a query option, and inject that repository into your service.
Since the few queries I needed were so simplistic (just returning single values), it seemed like a fine tradeoff in my case.
2) Another way of handling it is to force the caller to raise the command with the user_id already in it, letting them do the querying.
3) A third option is to ask yourself why you need the user_id. If it's to make some relations when querying the data, you could also have this handled when querying the data (or when propagating your write DB to your read DB).
I need to continuously check memory for a change so I can raise a notification, and I use System.Threading.Timer to achieve it. I want the notification ASAP, so I need to run the callback method quite often, but I don't want the CPU at 100% doing this.
Can anybody tell me how I should set the interval of this timer? (I think it would be good to set it as low as possible.)
Thanks
OK, so there is a very basic strategy for how you can be immediately notified of a modification to the dictionary without incurring any unnecessary CPU cycles, and it involves using Monitor.Wait and Monitor.Pulse/Monitor.PulseAll.
On a very basic level, you have something like this:
public Dictionary<long, CometMessage> Messages = new Dictionary<long, CometMessage>();

public void ModifyDictionary(int key, CometMessage value)
{
    lock (Messages) // Pulse/PulseAll must be called while holding the monitor
    {
        Messages[key] = value;
        Monitor.PulseAll(Messages);
    }
}

public void CheckChanges()
{
    while (true)
    {
        lock (Messages) // Wait must also be called while holding the monitor
        {
            Monitor.Wait(Messages); // releases the lock until pulsed
            // The dictionary has changed!
            // TODO: Do some work!
        }
    }
}
Now, this is very rudimentary and you could get all sorts of synchronization issues (read/write), so you should look into Marc Gravell's implementation of a blocking queue and apply the same logic to your dictionary (essentially making a blocking dictionary).
Furthermore, the above example will only let you know when the dictionary is modified, but it will not inform you of WHICH element was modified. It's probably better if you take the basics from above and design your system so you know which element was last modified by perhaps storing the key (e.g. last key) and just checking the value associated with it.
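As a rough sketch of that last suggestion, you can remember the last modified key under the same lock and read it back when the waiter wakes (note that if several writes happen before the waiter runs, only the most recent key is seen):

private long _lastKey; // guarded by the Messages lock

public void ModifyDictionary(int key, CometMessage value)
{
    lock (Messages)
    {
        Messages[key] = value;
        _lastKey = key; // remember which entry changed
        Monitor.PulseAll(Messages);
    }
}

public void CheckChanges()
{
    while (true)
    {
        lock (Messages)
        {
            Monitor.Wait(Messages);
            CometMessage changed = Messages[_lastKey];
            // TODO: Do some work with the changed entry!
        }
    }
}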
Does anybody know of a Windows tool to report fake dates/times to a process?
Apparently there are Linux programs that can be used to test how the software will react in the future / in a different timezone or to trigger scheduled tasks without actually modifying the system clock. Are there such programs for Windows?
RunAsDate can do this.
Starting about two years ago, I have always abstracted the call to DateTime.Now (C#) through a utility class. This way I can fake the date/time if I want to, or just pass the call straight through to DateTime.Now.
Sounds similar to what BillH suggested.
public class MyUtil
{
    public static DateTime GetDateTime()
    {
        return DateTime.Now.AddHours(5);
        //return DateTime.Now;
    }
}
Wrap your calls to the getCurrentDateTime() system call so you can introduce offsets or multipliers into the times your code reads during testing?
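For example, a minimal version of such a wrapper might look like this (TestableClock is just an illustrative name):

using System;

public static class TestableClock
{
    // Zero in production; set from test setup to simulate another point in time.
    public static TimeSpan Offset = TimeSpan.Zero;

    public static DateTime GetCurrentDateTime()
    {
        return DateTime.Now + Offset;
    }
}

// In a test:
// TestableClock.Offset = TimeSpan.FromDays(365); // pretend it's a year from now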