Scope of imported macro-generated code in Elixir

I am going through Saša Jurić's fantastic series on macros, and while running the following piece of code, I encountered something that confused me:
defmodule Plug.Router do
  defmacro get(route, body) do
    quote do
      defp do_match("GET", unquote(route), var!(conn)) do
        unquote(body[:do])
      end
    end
  end
end
defmodule MyRouter do
  import Plug.Router

  def match(type, route) do
    do_match(type, route, :dummy_connection)
  end

  get "/hello", do: {conn, "Hi!"}
  get "/goodbye", do: {conn, "Bye!"}
end

MyRouter.match("GET", "/hello") |> IO.inspect
MyRouter.match("GET", "/goodbye") |> IO.inspect
My questions are:
1. In which module is the do_match/3 function injected: Plug.Router or MyRouter? Further down in the article, it's mentioned that use and require inject the code into the caller's module. Is this valid for imported macros too?
2. Where in Elixir's source should I look for the implementation of this behaviour?
3. Is there an easy way to inspect the structure of a module after macro expansion? Something equivalent to Macro.to_string/2, but for modules?

The functions generated by the macro will be defined in MyRouter.
If you require a module, its macros are made available to the caller module. If you import a module, its macros and functions are additionally accessible without having to use the module name as a prefix. See the getting started guide on alias, require and import for more details.
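For instance, using the modules above (a quick sketch; the route bodies are just illustrations):

defmodule RequiredRouter do
  require Plug.Router

  # with require, the macro must be fully qualified:
  Plug.Router.get "/hi", do: {conn, "Hi!"}
end

defmodule ImportedRouter do
  import Plug.Router

  # with import, the macro is available unprefixed:
  get "/hi", do: {conn, "Hi!"}
end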
require and import are quite low-level features of Elixir, so they are implemented directly in Erlang. A fair amount of the import and require logic can be found here.
I am not aware of a way of introspecting these things, since macros leave essentially no trace in the compiled code. It's part of the beauty of macros that they introduce no additional overhead. However, you can check which functions a module contains after macro expansion with MyRouter.__info__(:functions).
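For example, with the router above (note that the private do_match/3 clauses don't show up, since __info__(:functions) only lists public functions):

MyRouter.__info__(:functions)
#=> [match: 2]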

Related

Create interface wrapping existing types with pointer-receiver methods

I need to test an app which uses Google Cloud Pubsub, and so must wrap its types pubsub.Client and pubsub.Subscription for testing purposes. However, despite several attempts I can't get an interface around them which compiles.
The definitions of the methods I'm trying to wrap are:
func (s *Subscription) Receive(
    ctx context.Context, f func(context.Context, *Message)) error

func (c *Client) Subscription(id string) *Subscription
Here is the current code. The Receiver interface (wrapper around Subscription) seems to work, but I suspect it may need to change in order to fix SubscriptionMaker, so I've included both.
Note: I've tried several variations of where to reference and dereference pointers, so please don't tell me to change that unless you have an explanation of why your suggested configuration is the correct one or you've personally verified it compiles.
import (
    "context"

    "cloud.google.com/go/pubsub"
)

type Receiver interface {
    Receive(context.Context, func(ctx context.Context, msg *pubsub.Message)) (err error)
}

// Pubsub subscriptions implement Receiver
var _ Receiver = &pubsub.Subscription{}

type SubscriptionMaker interface {
    Subscription(name string) (s Receiver)
}

// Pubsub clients implement SubscriptionMaker
var _ SubscriptionMaker = pubsub.Client{}
Current error message:

common_types.go:21:5: cannot use "cloud.google.com/go/pubsub".Client literal (type "cloud.google.com/go/pubsub".Client) as type SubscriptionMaker in assignment:
    "cloud.google.com/go/pubsub".Client does not implement SubscriptionMaker (wrong type for Subscription method)
        have Subscription(string) *"cloud.google.com/go/pubsub".Subscription
        want Subscription(string) Receiver
First, for most uses, the pstest package is probably a much easier approach for testing pubsub. But of course, your specific question can apply to any library, and the approach below can be useful for many things, not just mocking pubsub.
Your broader goal of using interfaces to mock a library like this is doable. But it is complicated when the library you wish to mock returns concrete types that you cannot mock (usually due to unexported fields). The approach required is much more involved than is often worth it, as there may be easier ways to test your code.
But if you're intent on doing this, the approach you must take is to wrap the entire package in interfaces, not just the specific methods you wish to mock.
You would also need to wrap any types you wish to mock that are returned by or accepted by your interface. This usually means modifying your production code (not just your test code), so it can sometimes be a deal-breaker for existing code bases.
Where I have usually done this before is when mocking something like the standard library's sql driver, but the same approach applies here. In essence, you create a wrapper package for the pubsub library, which you use even in your production code. Again, this can be quite intrusive on existing codebases, but for the sake of illustration, using your defined interfaces:
package mypubsub

import (
    "context"

    "cloud.google.com/go/pubsub"
)

type Receiver interface {
    Receive(context.Context, func(context.Context, *pubsub.Message)) error
}

type SubscriptionMaker interface {
    Subscription(string) Receiver
}
You can then wrap the default implementation, for use in production code:
// defaultClient wraps the default pubsub Client functionality.
type defaultClient struct {
    *pubsub.Client
}

func (d defaultClient) Subscription(name string) Receiver {
    return d.Client.Subscription(name)
}
Naturally, you'd need to expand this package to wrap most or all of the pubsub package you're using. This can be a bit daunting.
But once you've done that, use your mypubsub package everywhere in your code instead of depending on the pubsub package directly. You can then easily swap in a mock implementation anywhere you need for testing.
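For example, a test double might look like this (a sketch assuming the Receiver interface above; mockReceiver and its fields are illustrative names):

// in the mypubsub package (or a _test file alongside it)
package mypubsub

import (
    "context"

    "cloud.google.com/go/pubsub"
)

// mockReceiver satisfies Receiver by replaying canned messages.
type mockReceiver struct {
    msgs []*pubsub.Message
}

var _ Receiver = (*mockReceiver)(nil) // compile-time check

func (m *mockReceiver) Receive(ctx context.Context, f func(context.Context, *pubsub.Message)) error {
    // deliver each canned message to the callback, as the real
    // Subscription.Receive would
    for _, msg := range m.msgs {
        f(ctx, msg)
    }
    return nil
}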
It can't be done.
To satisfy an interface, a method's signature must match exactly. func (c *Client) Subscription(id string) *Subscription returns a *Subscription, and a *Subscription is a valid Receiver, but that does not make it conform to the interface method Subscription(string) Receiver. Go requires exact signature matches and does not support covariant return types, despite the structural matching it otherwise uses for interfaces.

How to make Flow understand code written for Node.js?

I'm just getting started with Flow, trying to introduce it into an existing Node codebase.
Here are two lines Flow complains about:
import Module from 'module';
const nodeVersion = Number(process.versions.node.split('.')[0]);
The warnings about these lines are, respectively:
module. Required module not found
call of method `split`. Method cannot be called on possibly null value
So it seems like Flow isn't aware of some things that are standard in a Node environment (e.g. process.versions.node is guaranteed to be a string, and there is definitely a Node builtin called module).
But then again, Flow's configuration docs suggest it's Node-aware by default. And I have plenty of other stuff like import fs from 'fs'; which does not cause any warning. So what am I doing wrong?
The fs module works as expected because Flow ships with built-in definitions for it; see declare module "fs" here: https://github.com/facebook/flow/blob/master/lib/node.js#L624
Regarding process.versions.node, you can see in the same file that the versions key is typed as a map of nullable strings, with no mention of the specific node property: versions : { [key: string] : ?string };. So you'll need to either make a PR to improve this definition, or adjust your code to handle the possibility of that value being null.
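For the second option, Flow accepts the call once the value has been narrowed (a quick sketch):

// read the nullable value into a local so Flow can narrow it
const node = process.versions.node;
const nodeVersion = node != null ? Number(node.split('.')[0]) : NaN;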
I guess the answer about the "module" module is obvious now: there are no built-in definitions for it in Flow's lib/node.js. You could write your own definitions, and optionally send a PR with them to the Flow team. You can also try searching GitHub for these; someone might have done the work already.
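A minimal custom libdef might look like this (a sketch; the real definitions should of course mirror the actual 'module' API rather than using any):

// flow-typed/module.js (any path listed under [libs] in .flowconfig works)
declare module "module" {
  declare module.exports: any;
}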
That lib directory is very useful by the way, it has Flow definitions for DOM and other stuff as well.

Elixir - Changing Behavior

This is mostly a Functional Programming question rather than an Elixir one, but since I'm learning Elixir it would be nice if someone can answer it using that language. Even so, if someone wants to give a more general answer it'll be appreciated.
I'm an OO programmer myself and I can't wrap my head around how to change the behavior of a component based on a configuration file (for example).
Example:
I have an application that loads/saves users from a database. In a production environment, I want my users to be saved and retrieved from a MongoDB database, while in development and testing I want to use an in-memory map. If I were programming the given system in an OO language (let's say Java), I would simply make an interface named "UserRepository" with two implementations: "MemoryUserRepository" and "MongoDBUserRepository". I would then instantiate the corresponding repository based on a configuration file (or hardcode it, it doesn't matter) at startup, and from then on all the objects that interact with the repository would never know its implementation (they use a repository, but never care whether it's in memory or in Mongo).
That gives me the ability to create as many implementations as I want and the only thing I need to do to change the behavior of the system is instantiate the implementation that I want to use.
I want the same behavior but in Elixir (let's use the same example). Since it's not an object-oriented language, I can't use the above approach. Obviously I want it to be extensible. I could easily pass a string with the type of repository I want to use in each call and use pattern matching to determine the behavior, but that doesn't scale well: every time I add an implementation, I'd have to find every piece of code that pattern-matches on the type and extend it. What would be the best approach to achieve this?
Thanks in advance!
Suppose you have these two (or more) repository implementations, which implement the same interface:
defmodule MyApp.Repository.Memory do
  def get(key) do
    # ...
  end

  def put(key, value) do
    # ...
  end
end

defmodule MyApp.Repository.Disk do
  def get(key) do
    # ...
  end

  def put(key, value) do
    # ...
  end
end
Then you can write a general repository module that will just forward the function calls to one of the repository backends, based on a configuration value in your config/config.exs file:
defmodule MyApp.Repository do
  @backend Application.get_env(:my_app, :repository_backend)

  defdelegate get(key), to: @backend
  defdelegate put(key, value), to: @backend
end
The configuration can be made so that it is environment specific (just look at the default config.exs in a mix project freshly created with mix new my_app):
# config/config.exs
import_config "#{Mix.env}.exs"

# config/dev.exs
config :my_app, repository_backend: MyApp.Repository.Memory

# config/prod.exs
config :my_app, repository_backend: MyApp.Repository.Disk
Throughout your entire code, you can then just use the MyApp.Repository module without explicitly referencing one of the specific implementations:
MyApp.Repository.put(:foo, "Hello world!")
value = MyApp.Repository.get(:foo)
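If you also want the compiler's help in keeping the backends in sync, the shared interface can be formalized as a behaviour (a minimal sketch; the callback specs are my assumption about the get/put contract):

defmodule MyApp.Repository.Backend do
  @callback get(key :: term) :: term
  @callback put(key :: term, value :: term) :: :ok
end

Each backend module then adds `@behaviour MyApp.Repository.Backend`, and the compiler warns if an implementation is missing one of the callbacks.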

SBT run code in project after compile

We need to run some code after the compile step. Making things happen after the compile step seems easy:
compile in Compile <<= (compile in Compile) map { x =>
  // post-compile work
  doFoo()
  x
}
but how do you run something in the freshly compiled code?
More info on the scenario: we are using less for css in a lift project. We wanted lift to compile less into css on the fly (if needed) to help dev, but to produce the css using the same code during the build, before tests etc. run. less-sbt may help, but we are interested in how to solve this generally.
You can use the triggeredBy method like this:
yourTask <<= (fullClasspath in Runtime) map { classpath =>
  val loader: ClassLoader = ClasspathUtilities.toLoader(classpath.map(_.data).map(_.getAbsoluteFile))
  loader.loadClass("your.class.Here").newInstance()
} triggeredBy (compile in Compile)
This will instantiate your class that has just been compiled, using the runtime classpath for your application, after any compile.
It would probably help if you explained your use scenario for this, since there are several possible solution paths here and choosing between them might involve considerations you haven't mentioned.
You won't be able to just write down an ordinary method call into the compiled code. That would be impossible, since at the time your build definition is compiled, sbt hasn't looked at your project code yet.
Warning: rambling and thinking out loud ahead.
One trick I can suggest is to access testLoader in Test to get a classloader in which your compiled classes are loaded, and then use reflection to call methods there. For example, in my own build I have:
val netlogoVersion = taskKey[String]("...")

netlogoVersion := {
  (testLoader in Test).value
    .loadClass("org.nlogo.api.Version")
    .getMethod("version")
    .invoke(null).asInstanceOf[String]
}
I'm not sure whether accessing testLoader in Test will actually work in your case because testLoader loads your test classes as well as your regular classes, so you might get a circular dependency between compile in Compile and compile in Test.
If you want to try to make a classloader that just has your regular classes loaded, well, hmm. You could look in the sbt source code at the implementation of createTestLoader and use it for inspiration, modifying the arguments that are passed to ClasspathUtilities.makeLoader. (You might also look at the similar code in Run.run0. It calls makeLoader as part of the implementation of the run task.)
A different path you might consider is to reuse the machinery behind the run task to run your code. You won't be able to call an arbitrary method in your compiled code this way, only a main method, but perhaps you can live with that, if you don't need a return value back.
The fullRunTask method exists for creating entire run-like tasks. See "How can I create a custom run task, in addition to run?" from http://www.scala-sbt.org/0.13.1/docs/faq.html . fullRunTask makes it very easy to create a separate task that runs something in your compiled code, but by itself it won't get you all the way to a solution, because you need a way of attaching that task to the existing compile in Compile task. If you go this route, I'd suggest asking about that last piece as a separate question.
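For illustration, the wiring might look roughly like this (a sketch combining fullRunTask with the triggeredBy trick from the other answer; the task name and main class are placeholders):

lazy val postCompileRun = taskKey[Unit]("run freshly compiled code after compile")

fullRunTask(postCompileRun, Compile, "name.of.my.MainClass")

postCompileRun <<= postCompileRun triggeredBy (compile in Compile)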
Consider bypassing fullRunTask and just assembling your own call to Run.run; they use the same machinery. In my own build I currently use fullRunTask, but before fullRunTask was added to sbt, my equivalent Run.run-based code looked like this:
(..., fullClasspath in Compile, runner, streams, ...) map {
  (..., cp, runner, s, ...) =>
    Run.run("name.of.my.MainClass",
      cp.map(_.data), Seq(), s.log)(runner)
}
Pardon the sbt 0.12, pre-macro syntax; this would look nicer if redone with the 0.13 macros.
Anyway, hopefully something in this brain dump proves useful.

How do I change Closure Compiler compile options not exported to command line?

I found that some options in CompilerOptions are not exposed on the command line.
For example, alias all strings is available in the Closure Compiler's Java API (CompilerOptions), but I have no idea how to set it from the command line.
I know I can create a new Java class, like:
Compiler c = new Compiler();
CompilerOptions opt = new CompilerOptions();
opt.setAliasAllStrings(true);
c.compile(.....);
However, I'd then have to handle the command-line args myself. Any simpler idea?
============================
In order to try the alias all strings option, I wrote a simple command-line application based on compiler.jar.
However, I found that the result I get with alias all strings enabled is not what I expected.
For example:
a["prototype"]["say"]=function(){
var a="something string";
}
Given the above code, "something string" is replaced by a variable, like this:
var xx="something string";
....
var a=xx;
....
This is fine, but what about the string "say"? How does the Closure Compiler know whether it should be aliased (replaced with a variable) or exported (kept as a property name)?
This is the compiled code now:
a.prototype.say=function(){....}
It seems that the compiler exported it.
While I want this:
var a="prototype",b="say",c="something string";
xx[a][b]=function(){.....}
In fact, this is the Google Maps-like compilation.
Is this possible?
Not all options are available from the command line; this includes aliasAllStrings. For some of them you have the following options:
Build a custom version of the compiler
Use the Java API (see example).
Use plovr
Getting the same level of compression and obfuscation as the Maps API requires code written specifically for the compiler. When properly written, you'll see property and namespace collapsing, prototype aliasing, and a whole host of other optimizations. For an example of the style of code that optimizes that way, take a look at the Closure Library.
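For a rough idea of that style, something like this (a sketch; under ADVANCED_OPTIMIZATIONS the namespace, prototype chain, and property names below can be collapsed and renamed):

/** @const */
var myapp = {};

/** @constructor */
myapp.Greeter = function() {};

/** @return {string} */
myapp.Greeter.prototype.say = function() {
  return 'something string';
};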
Modifying http://code.google.com/p/closure-compiler/source/browse/trunk/src/com/google/javascript/jscomp/CompilationLevel.java?r=706 is usually easy enough if you just want to play with something.
Plovr (a Closure build tool) provides an option called experimental-compiler-options, which is documented as follows:
The Closure Compiler contains many options that are only available programmatically in Java. Many of these options are experimental or not finalized, so they may not be a permanent part of the API. Nevertheless, many of them will be useful to you today, so plovr attempts to expose them via the experimental-compiler-options option. Under the hood, it uses reflection in Java, so it is fairly hacky, but in practice it is a convenient way to experiment with Closure Compiler options without writing Java code.
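A plovr config using it might look roughly like this (a sketch; I haven't verified the exact key plovr expects for aliasAllStrings):

{
  "id": "my-app",
  "inputs": "main.js",
  "experimental-compiler-options": {
    "aliasAllStrings": true
  }
}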
