typescript, ts-morph module, is there access to underlying `ts.SourceFile` instance from a `ts-morph.SourceFile` instance? - typescript-compiler-api

I am trying out the ts-morph npm module to replace some code I have already written that overlaps with ts-morph and is inferior to it. Nevertheless, I have some existing functions that take ts.Node-typed arguments, mostly for exploration and discovery, which I need to use for reference while trying out ts-morph.
However, I can't see a way to access the underlying ts.Node instance from a ts-morph.SourceFile instance - there are no ts-morph functions with a return type of ts.Node or ts.TypeChecker.
This doesn't work
(sourceFile as unknown) as ts.SourceFile,
(checker as unknown) as ts.TypeChecker,
because, for starters, (sourceFile as unknown) as ts.SourceFile doesn't have a kind member.
Is there a way to access the underlying ts.Node instance from, e.g., a ts-morph.SourceFile?

ts-morph provides access to all the underlying compiler API objects it wraps.
For any node, you can access the underlying compiler node using the compilerNode property:
sourceFile.compilerNode // ts.SourceFile
Note though that the underlying compiler node will become out of date whenever the source file is manipulated via ts-morph (ex. you add a class to the source file, remove a function, or other stuff like that).
https://github.com/dsherret/ts-morph/blob/af35677f3b498ed0f8e87e4b6c92a7246cfab210/packages/ts-morph/lib/ts-morph.d.ts#L3189
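For example, a minimal sketch (printNodeKind stands in for one of your existing ts.Node-based functions):
import { Project } from "ts-morph";
import * as ts from "typescript";

// Stand-in for an existing function that takes a ts.Node argument.
function printNodeKind(node: ts.Node): void {
    console.log(ts.SyntaxKind[node.kind]);
}

const project = new Project();
const sourceFile = project.createSourceFile("example.ts", "class Foo {}");

// compilerNode is the underlying ts.SourceFile (which is itself a ts.Node).
printNodeKind(sourceFile.compilerNode); // prints "SourceFile"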
To get the TypeScript TypeChecker, use the compilerObject property on TypeChecker:
project.getTypeChecker().compilerObject // ts.TypeChecker
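Continuing the sketch above (same project and sourceFile; getClassOrThrow is the ts-morph accessor for the class declaration):
const checker: ts.TypeChecker = project.getTypeChecker().compilerObject;

// The compiler objects can now be mixed with plain compiler API calls.
const classNode = sourceFile.getClassOrThrow("Foo").compilerNode;
const type = checker.getTypeAtLocation(classNode);
console.log(checker.typeToString(type));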

Related

Kotlin: How are a Delegate's get- and setValue Methods accessed?

I've been wondering how delegated properties (the "by" keyword) work under the hood. I get that by contract the delegate (the right side of "by") has to implement getValue and setValue(...) methods, but how can that be ensured by the compiler, and how can those methods be accessed at runtime? My initial thought was that obviously the delegates must be implementing some sort of "SuperDelegate" interface, but it appears that is not the case. So the only option left (that I am aware of) would be to use reflection to access those methods, possibly implemented at a low level inside the language itself. I find that somewhat weird, since by my understanding that would be rather inefficient. Also, the reflection API is not even part of the stdlib, which makes it even weirder.
I am assuming that the latter is already (part of) the answer. So let me furthermore ask the following: why is there no SuperDelegate interface that declares the getter and setter methods that we are forced to use anyway? Wouldn't that be much cleaner?
The following is not essential to the question
The described interface(s) are even already defined as ReadOnlyProperty and ReadWriteProperty. The decision which one to use could then be made dependent on whether we have a val or a var. Or that could even be omitted, since calling the setValue method on vals is prevented by the compiler anyway, and only the ReadWriteProperty interface could be used as the SuperDelegate.
Arguably, requiring a delegate to implement a certain interface would make the construct less flexible. But that assumes the class used as a delegate is possibly unaware of being used as such, which I find unlikely given the specific requirements for the necessary methods. And if you still insist, here's a crazy thought: why not even go as far as making that class implement the required interface via extension? (I'm aware that's not possible as of now, but heck, why not? Probably there's a good 'why not', please let me know as a side note.)
The delegates convention (getValue + setValue) is implemented at the compiler side and basically none of its resolution logic is executed at runtime: the calls to the corresponding methods of a delegate object are placed directly in the generated bytecode.
Let's take a look at the bytecode generated for a class with a delegated property (you can do that with the bytecode viewing tool built into IntelliJ IDEA):
class C {
val x by lazy { 123 }
}
We can find the following in the generated bytecode:
This is the field of the class C that stores the reference to the delegate object:
// access flags 0x12
private final Lkotlin/Lazy; x$delegate
This is the part of the constructor (<init>) that initializes the delegate field, passing the function to lazy():
ALOAD 0
GETSTATIC C$x$2.INSTANCE : LC$x$2;
CHECKCAST kotlin/jvm/functions/Function0
INVOKESTATIC kotlin/LazyKt.lazy (Lkotlin/jvm/functions/Function0;)Lkotlin/Lazy;
PUTFIELD C.x$delegate : Lkotlin/Lazy;
And this is the code of getX():
L0
ALOAD 0
GETFIELD C.x$delegate : Lkotlin/Lazy;
ASTORE 1
ALOAD 0
ASTORE 2
GETSTATIC C.$$delegatedProperties : [Lkotlin/reflect/KProperty;
ICONST_0
AALOAD
ASTORE 3
L1
ALOAD 1
INVOKEINTERFACE kotlin/Lazy.getValue ()Ljava/lang/Object;
L2
CHECKCAST java/lang/Number
INVOKEVIRTUAL java/lang/Number.intValue ()I
IRETURN
You can see the call to the getValue method of Lazy that is placed directly in the bytecode. In fact, the compiler resolves the method with the correct signature for the delegate convention and generates the getter that calls that method.
This convention is not the only one implemented at the compiler side: there are also iterator, compareTo, invoke and the other operators that can be overloaded -- all of them are similar, but the code generation logic for them is simpler than that of delegates.
Note, however, that none of them requires an interface to be implemented: the compareTo operator can be defined for a type not implementing Comparable<T>, and iterator() does not require the type to be an implementation of Iterable<T>, they are anyway resolved at compile-time.
While the interfaces approach could be cleaner than the operators convention, it would allow less flexibility: for example, extension functions could not be used because they cannot be compiled into methods overriding those of an interface.
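To illustrate that last point, here is a small sketch (Box and its property are made up for the example) where getValue/setValue are provided as extension functions on a class that implements no delegate interface at all:
import kotlin.reflect.KProperty

class Box(var stored: Int) // knows nothing about delegation

// The delegate convention is satisfied by extension operator functions,
// which could not override members of a hypothetical SuperDelegate interface.
operator fun Box.getValue(thisRef: Any?, property: KProperty<*>): Int = stored
operator fun Box.setValue(thisRef: Any?, property: KProperty<*>, value: Int) {
    stored = value
}

class Example {
    var x: Int by Box(123) // compiles to direct calls to the extensions above
}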
If you look at the generated Kotlin bytecode, you'll see that a private field is created in the class holding the delegate you're using, and the get and set methods for the property just call the corresponding methods on that delegate field.
As the class of the delegate is known at compile time, no reflection has to happen, just simple method calls.

How to make Flow understand code written for Node.js?

I'm just getting started with Flow, trying to introduce it into an existing Node codebase.
Here are two lines Flow complains about:
import Module from 'module';
const nodeVersion = Number(process.versions.node.split('.')[0]);
The warnings about these lines are, respectively:
module. Required module not found
call of method `split`. Method cannot be called on possibly null value
So it seems like Flow isn't aware of some things that are standard in a Node environment (e.g. process.versions.node is guaranteed to be a string, and there is definitely a Node builtin called module).
But then again, Flow's configuration docs suggest it's Node-aware by default. And I have plenty of other stuff like import fs from 'fs'; which does not cause any warning. So what am I doing wrong?
Module fs works as expected because Flow comes with built-in definitions for it, see declare module "fs" here: https://github.com/facebook/flow/blob/master/lib/node.js#L624
Regarding process.versions.node, you can see in the same file that the versions key is typed as a map of nullable strings, with no mention of the specific node property: versions : { [key: string] : ?string };. So you'll need to either make a PR to improve this definition, or adjust your code for the possibility of that value being null.
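For example, a sketch of the second option, guarding the value so Flow can refine away the null:
const nodeVersionString = process.versions.node;
const nodeVersion =
  nodeVersionString != null ? Number(nodeVersionString.split('.')[0]) : NaN;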
I guess the answer about module "module" is obvious now – there are no built-in definitions for that module in Flow in lib/node.js. You could write your own definitions, and optionally send a PR with them to the Flow team. You can also try searching github for these, someone might have done the work already.
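A minimal stub definition might look like this (the file name is arbitrary; point the [libs] section of your .flowconfig at the directory containing it, and replace `any` with real types for the parts you actually use):
// decls/module.js
declare module "module" {
  declare module.exports: any;
}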
That lib directory is very useful by the way, it has Flow definitions for DOM and other stuff as well.

Unused gRPC ServerContext

I am new to gRPC and trying to use it in my existing system. However, I get this unused parameter error while compiling it.
server_grpc.cc:100:39: error: unused parameter ‘context’[-Werror=unused-parameter]
Status MyFunc(ServerContext* context, const QueryRequest* request,
Probably the context parameter is used in some other cases, but in a simple hello-world type of example it is not used. Is there a way to compile the protocol buffer without generating the ServerContext parameter?
I know I can make the compiler ignore warning messages, but I'm just wondering if it can be done without affecting the way my system is being compiled right now.
Also, I would like to know how the context is used. It would be great if anybody could give pointers on how to use it; I might find a use for it in my work.
The ServerContext is provided to, well, add context for every RPC you get. It'll allow you to tweak certain aspects of the RPC, such as deal with authentication, or add metadata to your response back to the client. You may or may not need that parameter, obviously, depending on your needs.
We didn't want to add an option for this specifically, because that would complicate the code and tooling for little benefit, so the code generator and the function signature force you to have that parameter at all times. Now this isn't really a big deal, because in C++ you can specifically ask your compiler to ignore a parameter in a specific instance, for example with the following:
Status SayHello(ServerContext* context, const HelloRequest* request,
HelloReply* reply) override {
(void) context; // ignore that variable without causing warnings
std::string prefix("Hello ");
reply->set_message(prefix + request->name());
return Status::OK;
}
And that's how I'd suggest you take care of that warning in that specific instance, without disabling warnings for your whole project.
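As for how the context can actually be used, here is a rough sketch of a couple of common uses (the calls are from the stock grpc::ServerContext API; whether you need them depends on your service):
Status SayHello(ServerContext* context, const HelloRequest* request,
                HelloReply* reply) override {
  // Bail out early if the client has already cancelled the call.
  if (context->IsCancelled()) {
    return Status(grpc::StatusCode::CANCELLED, "call was cancelled");
  }
  // Attach custom metadata to the response headers sent back to the client.
  context->AddInitialMetadata("server-note", "handled");
  // context->peer() and context->client_metadata() are also available if you
  // need to know who is calling or what metadata they sent.
  reply->set_message("Hello " + request->name());
  return Status::OK;
}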

Spawn object with custom script with Unity Network

I have a player prefab that spawns more objects in OnStartLocalPlayer().
So in OnStartLocalPlayer() it calls a [Command] (assume that it's called on the server) that instantiates a GameObject and sets up some values of its scripts. At the end it calls SpawnWithClientAuthority()...
The thing is that on the owner client and on the server those script tweaks are correct, but on all other clients all of those settings are lost (e.g. GameObject refs etc.). What am I doing wrong?
Once more, in a nutshell: the player prefab GO must have a ref list of several other objects, and those objects must have a ref to that player prefab GO. (Making them part of the player prefab GO is not a solution.)
If I understand your problem correctly, you need the references to be set across all clients who have the same game object. [Command]s are for client-to-server calls. What you need is a [ClientRpc]. Make OnStartLocalPlayer() call a [ClientRpc] function, and in that function (e.g. RpcSetRefs()) set the references you need each client to have.
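A rough sketch of that pattern (UNet HLAPI; CmdSpawnOwned, RpcSetRefs, MyRefs and its owner field are illustrative names, not part of any API):
using UnityEngine;
using UnityEngine.Networking;

public class PlayerSetup : NetworkBehaviour
{
    public GameObject ownedPrefab;

    public override void OnStartLocalPlayer()
    {
        CmdSpawnOwned();
    }

    [Command]
    void CmdSpawnOwned()
    {
        var go = (GameObject)Instantiate(ownedPrefab);
        NetworkServer.SpawnWithClientAuthority(go, connectionToClient);
        // Setting fields here only changes the server's copy; the RPC below
        // repeats the wiring on every client that has this player object.
        RpcSetRefs(go);
    }

    [ClientRpc]
    void RpcSetRefs(GameObject spawned)
    {
        spawned.GetComponent<MyRefs>().owner = gameObject;
    }
}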

Use and necessity of pthread_attr_init()

I just created a C program that creates threads using POSIX thread library functions. I didn't use the pthread_attr_init() function in it, yet my program works fine. So, what is the use of pthread_attr_init() and what does it do? I am not familiar with thread concepts. Can anyone tell me whether it is compulsory to use pthread_attr_init() in a threading program?
pthread_attr_init is used to initialise a thread attributes structure, which can then be passed to pthread_create.
If you are creating threads with default attributes, you pass a NULL pointer for the thread attributes argument to pthread_create, and there is no need to initialise an attributes structure.
However, if you want to configure specific thread attributes, such as the scheduling policy, priority, or stack size, then you must use pthread_attr_init to initialise the attributes structure before manipulating it with the attribute accessor functions (pthread_attr_set... and pthread_attr_get...) and passing it to pthread_create.
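For example, a minimal sketch that creates a detached thread through a customised attributes structure (compile with -pthread):
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    puts("hello from thread");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    /* Initialise the attributes structure, then customise it: here the
       thread is made detached so it never needs to be joined. */
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    /* Passing &attr instead of NULL applies the customised attributes. */
    pthread_create(&tid, &attr, worker, NULL);

    /* The attributes structure may be destroyed once the thread exists. */
    pthread_attr_destroy(&attr);

    pthread_exit(NULL); /* let the detached thread run to completion */
}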
