I have a C++/CX app that is processing some data from a file. It has a string in there representing the culture that was used to save the dates, and it has some dates. I need to convert them from strings to Platform::DateTime. I have heard that Windows::Globalization::DateTimeFormatting is the class to use, but I don't see how to use it for that. Does anyone have an example?
The C++/CX projection of WinRT differs from the managed (C#/VB) projection in a number of ways, and one of the most significant is the projection of fundamental types (such as Point, Size, String, and DateTime).
The managed projection exposes these types as .NET types (with all the underlying support of the BCL), while the C++ projection projects them minimally, for interop, expecting the user to rely on C++ library support for more advanced functionality.
So where .NET gives you a System.Int32 for a signed 32-bit integer (with its relevant .Parse functionality), C++ gives you a plain int, and you are expected to use CRT functionality (_wtoi) to accomplish a similar task. This difference often results in a 'feature gap' between the projections, and one of the more painful gaps is in dealing with the DateTime structure (which has very rich support in the BCL).
The solution I've come up with starts with the COleDateTime class (found by including ATLComTime.h) and goes COleDateTime -> SYSTEMTIME -> FILETIME -> _ULARGE_INTEGER -> Windows::Foundation::DateTime. It's serious gymnastics, but COleDateTime has the language-specific parsing capability that you require.
LCID lcid = LocaleNameToLCID(L"es-es", LOCALE_ALLOW_NEUTRAL_NAMES); //parse language tag to get locale ID
COleDateTime dt;
dt.ParseDateTime(L"12 enero, 2012 10:00:01", 0, lcid); //parse date string based on language
//get system time struct
SYSTEMTIME st;
dt.GetAsSystemTime(st);
//COleDateTime is in local timezone, DateTime is in UTC, so we need to convert
SYSTEMTIME st_utc;
TzSpecificLocalTimeToSystemTime(nullptr, &st, &st_utc);
//get filetime struct to get a time format compatible with DateTime
FILETIME ft;
SystemTimeToFileTime(&st_utc, &ft);
//use _ULARGE_INTEGER to get a uint64 to set the DateTime struct to
_ULARGE_INTEGER ulint = {ft.dwLowDateTime, ft.dwHighDateTime};
Windows::Foundation::DateTime wfdt;
wfdt.UniversalTime = ulint.QuadPart;
I've asked around about the DateTimeFormatter class, and the documentation is incorrect; it does not support parsing and is not intended to (only formatting).
Would std::get_time do the trick? The example at the bottom of the page suggests you can parse a date string using a locale into a tm struct. I imagine you could then convert the tm struct into the correct WinRT DateTime.
I use this code:
auto cal = ref new Windows::Globalization::Calendar();
cal->AddSeconds(additionalSeconds);
return cal->GetDateTime();
With Windows::Globalization::Calendar, you have all the nice time functions you need: https://msdn.microsoft.com/library/windows/apps/br206724
We have been using PubSubLite in our Go program without any issues, and I just started using the Java library with Beam.
Using the PubSubLite IO, we get PCollection of SequencedMessage specifically: https://cloud.google.com/java/docs/reference/google-cloud-pubsublite/latest/com.google.cloud.pubsublite.proto.SequencedMessage
Now, from it I can get the data by doing something like:
message.getMessage().getData().toByteArray()
and then doing the normal conversion.
But for attributes, I cannot seem to get just the value correctly. In Go, I could do:
msg.Attributes["attrKey"]
but when I do:
message.getMessage().getAttributesMap().get("attrKey")
I am getting an Object which I cannot seem to convert to just its string value. As far as I understand, it returns a Map<String, AttributeValues>, and these all seem to be just wrappers over the internal protobuf. Also, Map is an interface, so how do I get to the actual implementation to get the underlying value of each attribute?
The SequencedMessage attributes represent a multimap of string to bytes, not a map of string to string like in standard Pub/Sub. In the go client, by default the client will error if there are multiple values for a given key or if any of the values is not valid UTF-8, and thus presents a map[string]string interface.
When you call message.getMessage().getAttributesMap().get("attrKey"), you have a value of type AttributeValues which is a holder for a list of ByteStrings. To convert this to a single String, you would need to throw if the list is not of length 1, then call toStringUtf8 on the byte string element with index 0.
If you wish to interact with the standard Pub/Sub message format like you would in go, you can convert to this format by doing:
import org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage;
import org.apache.beam.sdk.io.gcp.pubsublite.CloudPubsubTransforms;
PCollection<SequencedMessage> messages = ...
PCollection<PubsubMessage> transformed = messages.apply(CloudPubsubTransforms.toCloudPubsubMessages());
Values that can be converted to a JSON string via json.dumps are:
Scalars: Numbers and strings
Containers: Mapping and Iterable
Union[str, int, float, Mapping, Iterable]
Do you have a better suggestion?
Long story short, you have the following options:
If you have zero idea how your JSON is structured and must support arbitrary JSON blobs, you can:
Wait for mypy to support recursive types.
If you can't wait, just use object or Dict[str, object]. It ends up being nearly identical to using recursive types in practice.
If you don't want to constantly have to type-check your code, use Any or Dict[str, Any]. Doing this lets you avoid needing to sprinkle in a bunch of isinstance checks or casts at the expense of type safety.
If you know precisely what your JSON data looks like, you can:
Use a TypedDict
Use a library like Pydantic to deserialize your JSON into an object
More discussion follows below.
Case 1: You do not know how your JSON is structured
Properly typing arbitrary JSON blobs is unfortunately awkward to do with PEP 484 types. This is partly because mypy (currently) lacks recursive types: this means that the best we can do is use types similar to the one you constructed.
(We can, however, make a few refinements to your type. In particular, json.dumps(...) actually does not accept arbitrary iterables: a generator is a subtype of Iterable, for example, but json.dumps(...) will refuse to serialize one. You probably want to use something like Sequence instead.)
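A quick standard-library check illustrates that refusal: a list serializes fine, but a generator (which is still an Iterable) raises TypeError:

```python
import json

# Lists and dicts serialize fine
assert json.dumps([1, 2, 3]) == "[1, 2, 3]"

# A generator is an Iterable, but json.dumps refuses it
gen = (x * x for x in range(3))
try:
    json.dumps(gen)
    raised = False
except TypeError:
    raised = True  # "Object of type generator is not JSON serializable"
assert raised
```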
That said, having access to recursive types may not end up helping that much either: in order to use such a type, you would need to start sprinkling in isinstance checks or casts into your code. For example:
JsonType = Union[None, int, str, bool, List["JsonType"], Dict[str, "JsonType"]]
def load_config() -> JsonType:
# ...snip...
config = load_config()
assert isinstance(config, dict)
name = config["name"]
assert isinstance(name, str)
So if that's the case, do we really need the full precision of recursive types? In most cases, we can just use object or Dict[str, object] instead: the code we write at runtime is going to be nearly the same in either case.
For example, if we changed the example above to use JsonType = object, we would still end up needing both asserts.
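Here is a runnable sketch of that object-based pattern (the inline blob stands in for the hypothetical load_config() above):

```python
import json

# Hypothetical config blob standing in for load_config()
config: object = json.loads('{"name": "app", "env": "prod"}')

# 'object' tells mypy nothing about the shape, so we must narrow before use
assert isinstance(config, dict)
name = config["name"]
assert isinstance(name, str)
print(name.upper())  # mypy now knows name is a str
```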
Alternatively, if you find sprinkling in assert/isinstance checks to be unnecessary for your use case, a third option is to use Any or Dict[str, Any] and have your JSON be dynamically typed.
It's obviously less precise than the options presented above, but asking mypy to not type check uses of your JSON dict and relying on runtime exceptions instead can sometimes be more ergonomic in practice.
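As a sketch of the dynamically typed approach (the blob here is invented for illustration), note that mypy accepts any operation on an Any value without complaint:

```python
import json
from typing import Any, Dict

config: Dict[str, Any] = json.loads('{"name": "app", "port": 8080}')

# mypy checks none of these uses; mistakes surface as runtime exceptions instead
label = config["name"].upper()
port_doubled = config["port"] * 2
```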
Case 2: You know how your JSON data will be structured
If you do not need to support arbitrary JSON blobs and can assume it forms a particular shape, we have a few more options.
The first option is to use TypedDicts instead. Basically, you construct a type explicitly specifying what a particular JSON blob is expected to look like and use that instead. This is more work, but it can give you more type safety.
The main disadvantage of using TypedDicts is that it's basically the equivalent of a giant cast in the end. For example, if you do:
from typing import TypedDict
import json
class Config(TypedDict):
name: str
env: str
with open("my-config.txt") as f:
config: Config = json.load(f)
...how do we know that my-config.txt actually matches this TypedDict?
Well, we don't, not for certain.
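To make the "giant cast" point concrete, here is a small demonstration (the mismatched blob is invented for illustration): the annotation is never checked at runtime.

```python
import json
from typing import TypedDict

class Config(TypedDict):
    name: str
    env: str

# Deliberately wrong data: name is an int and env is missing entirely
raw = '{"name": 123}'
config: Config = json.loads(raw)  # no runtime error: the annotation is just a cast

print(config["name"])  # 123, despite the 'name: str' declaration
```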
This can be fine if you have full control over where the JSON is coming from. In this case, it might be fine to not bother validating the incoming data: just having mypy check uses of your dict is good enough.
But if having runtime validation is important to you, your options are to either implement that validation logic yourself or use a 3rd party library that can do it on your behalf, such as Pydantic:
from pydantic import BaseModel
import json
class Config(BaseModel):
name: str
env: str
with open("my-config.txt") as f:
# The constructor will raise an exception at runtime
# if the input data does not match the schema
config = Config(**json.load(f))
The main advantage of using these types of libraries is that you get full type safety. You can also use object attribute syntax instead of dict lookups (e.g. do config.name instead of config["name"]), which is arguably more ergonomic.
The main disadvantage is doing this validation does add some runtime cost, since you're now scanning over the entire JSON blob. This might end up introducing some non-trivial slowdowns to your code if your JSON happens to contain a large quantity of data.
Converting your data into an object can also sometimes be a bit inconvenient, especially if you plan on converting it back into a dict later on.
There has been a lengthy discussion (https://github.com/python/typing/issues/182) about the possibility of introducing a JSONType; however, no definitive conclusion has yet been reached.
The current suggestion is to just define JSONType = t.Union[str, int, float, bool, None, t.Dict[str, t.Any], t.List[t.Any]] or something similar in your own code.
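A minimal sketch of that suggestion in use (parse is a hypothetical helper name):

```python
import json
import typing as t

JSONType = t.Union[str, int, float, bool, None, t.Dict[str, t.Any], t.List[t.Any]]

def parse(blob: str) -> JSONType:
    # json.loads already returns exactly these shapes
    return json.loads(blob)

doc = parse('{"tags": ["a", "b"], "count": 2}')
```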
The title says it all: I need to know whether methods are serialized along with object instances in C#. I know they aren't in Java, but I'm a little new to C#. If they aren't, do I have to package the original class with the byte stream (serialized object) when sending it to another PC? Can the original class be something like a DLL file?
No. The type information is serialized, along with state. In order to deserialize the data, your program will need to have access to the assemblies containing the types (including methods).
It may be easier to understand if you've learned C. A class like
class C
{
private int _m;
private int _n;
int Meth(int p)
{
return _m + _n + p;
}
}
is essentially syntactic sugar for
typedef struct
{
int _m;
int _n;
// NO function pointers necessary
} C;
int C_Meth(C* obj, int p)
{
return obj->_m + obj->_n + p;
}
This is essentially how non-virtual methods are implemented in object-oriented languages. The important thing here is that methods are not part of the instance data.
Methods aren't serialized.
I don't know about your scenario, but putting the types in a library (assembly/DLL) and using that library on the other end to deserialize gets you everything you need.
P.S. You should probably ask some more questions about the factors involved in your scenario. If you intend to dynamically send and run code, you can create awful security consequences.
I was confused when .NET first came up with serialization. I think it came from the fact that most books and guides mention that it allows you to serialize your 'objects' as XML and move them around. The fact is that you are actually hydrating the values of your object so you can dehydrate them later. At no point are you saving your whole object to disk, since that would require the DLL and is not contained in the XML file.
I am using WDDX to store a ColdFusion struct in a database, and I would like to maintain the pointers. Here's an example (sorry, the shorthand notation may be full of errors because I hardly ever use it):
tshirt={color={selected="red",options=["red","blue","yellow","white"]}};
tshirt.front= {colors=tshirt.color,design="triangle",ink="green"};
tshirt.back= {color=tshirt.color,design="square",ink="black"};
Right now, tshirt.front.color, tshirt.back.color and tshirt.color are all pointers to the same struct. If I change tshirt.color.selected to "blue", tshirt.back.color.selected and tshirt.front.color.selected will also be "blue".
However, suppose I WDDX tshirt and then unWDDX it. When I change tshirt.color.selected to "white", it is not changed in tshirt.front.color.selected or tshirt.back.color.selected.
Can anyone suggest another way to serialize and unserialize data that would preserve the pointers?
Just a few links that I've been using to research so far:
http://blog.adamcameron.me/2013/01/random-unsuccessful-experiment.html
http://www.simonwhatley.co.uk/the-inner-workings-of-a-coldfusion-array-and-structure
Use ObjectSave(), new in CF9:
Description
Converts a ColdFusion array, CFC, DateTime object, Java object, query,
or structure into a serializable binary object and optionally saves
the object in a file.
Returns
A serializable binary representation of the object.
<cfscript>
shirtdata = objectSave(tshirt);
tshirt2 = objectLoad(shirtdata);
tshirt2.color.selected = "blue";
writeOutput(tshirt2.front.colors.selected); // "blue" => reference kept
</cfscript>
Live Demo: http://www.trycf.com/scratch-pad/pastebin?id=L0g211aD
Support for reflection has now been added to F#, but it is not working for measure types. Is it possible to use reflection in F# for measure types?
I've read this. It was from 2008, but if you check some code like the sample below in ildasm, you cannot see anything about units of measure.
// Learn more about F# at http://fsharp.net
[<Measure>] type m
[<Measure>] type cm
let CalculateVelocity(length:float<m>, time:float<cm>) =
length / time
The ildasm output:
.method public static float64 CalculateVelocity(float64 length,
float64 time) cil managed
{
// Code size 5 (0x5)
.maxstack 4
IL_0000: nop
IL_0001: ldarg.0
IL_0002: ldarg.1
IL_0003: div
IL_0004: ret
} // end of method Program::CalculateVelocity
So there are some things that cannot be reflected in F#. Is that true or not?
See the comment in the article: "Units actually don't get seen at all by the CLR".
As others already pointed out, when you need to get some information about compiled F# types, you can use standard .NET reflection (System.Reflection) and F# reflection which provides information about discriminated unions, records, etc. (Microsoft.FSharp.Reflection).
Unfortunately, information about units of measure cannot be accessed using either of these two APIs, because units are checked only during compilation and do not actually exist at runtime (they cannot be represented in the CLR in any way). This means you'll never be able to find out whether, e.g., a boxed floating-point value has some unit of measure...
You can get some information about units of measure using Metadata namespace from F# PowerPack. For example, the following prints that foo is a unit:
namespace App
open System.Reflection
open Microsoft.FSharp.Metadata
[<Measure>]
type foo
module Main =
let asm = FSharpAssembly.FromAssembly(Assembly.GetExecutingAssembly())
for ent in asm.Entities do
if ent.IsMeasure then
printfn "%s is measure" ent.DisplayName
This reads some binary metadata that the compiler stores in compiled files (so that you can see units when you reference other F# libraries), so you should be able to see information about the public API of F# libraries.
Units of Measure is just a compile-time thing, it doesn't exist in the assembly/CLR.
From part one:
Units-of-measure are not just handy comments-on-constants: they are there in the types of values, and, what's more, the F# compiler knows the rules of units.
You can use .NET and F# reflection: the F# library also extends .NET's System.Reflection to give additional information about F# data types (see the source linked above).