Pact-node dependencies are very large; is there any way to reduce the size?

We have implemented contract testing using Pact for our AngularJS frontends and Java backends.
I've noticed that the node_modules/@pact-foundation directory is pretty huge (pact-node v4.3.2):
du -sh node_modules/@pact-foundation/
741M node_modules/@pact-foundation/
The JS UIs are only ever consumers, but the dependencies seem to pull in the following:
ls node_modules/@pact-foundation/
pact-mock-service pact-node pact-provider-verifier-linux-x64
pact-mock-service-linux-x64 pact-provider-verifier
Is there any way to pull in a smaller set of dependencies?
Edit: it seems the reason for this is as follows:
du -sh pact-node/node_modules/@pact-foundation/pact-mock-service/build/*
1.9M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock_service-0.8.2
8.9M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-0.8.2-1-linux-x86_64.tar.gz
8.5M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-0.8.2-1-linux-x86.tar.gz
9.2M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-0.8.2-1-osx.tar.gz
12M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-0.8.2-1-win32.zip
50M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-darwin
48M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-linux-ia32
50M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-linux-x64
51M pact-node/node_modules/@pact-foundation/pact-mock-service/build/pact-mock-service-win32
pact-node depends on pact-mock-service, and that bundled dependency includes the mock service binaries for every OS.
Edit 2:
Changing my dependency to the following:
"@pact-foundation/pact-node": "6.9.0",
and adding a resolution (I'm using Yarn, not npm):
"resolutions": {
    "@pact-foundation/pact-node": "6.9.0"
}
brings the total size of the dependencies down to:
du -sh node_modules/@pact-foundation/*
1.7M node_modules/@pact-foundation/pact-node
170M node_modules/@pact-foundation/pact-standalone
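Putting the two fragments together, the relevant part of package.json looks something like this (whether pact-node lives in dependencies or devDependencies depends on your setup; devDependencies is assumed here):
{
  "devDependencies": {
    "@pact-foundation/pact-node": "6.9.0"
  },
  "resolutions": {
    "@pact-foundation/pact-node": "6.9.0"
  }
}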
Cheers
Shane

Sadly, no, not yet.
Currently, our main Pact application is written in Ruby and is packaged with Travelling Ruby, a way to package Ruby apps for different OS/architecture combinations. Originally, the intention was to download only the OS/arch-specific binary so you don't have to pull down everything; however, a bug in npm causes issues with optional dependencies when a package-lock.json is committed to a repository. To work around this, we ended up having to package them all together, which I particularly dislike.
However, the good news is that we are working on this problem. We are currently reimplementing our Pact application in Rust, which compiles down to native binaries without all the extra baggage that comes with Ruby, drastically reducing the overall size. It isn't finalized just yet, but it is actively being worked on, so please be patient.
Thanks.

Related

Dataflow Job Startup Takes Too Long When Triggered from Composer

I have a static pipeline with the following architecture:
main.py
setup.py
requirements.txt
module1/
    __init__.py
    functions.py
module2/
    __init__.py
    functions.py
dist/
    setup_tarball
The setup.py and requirements.txt cover the local functions and non-native PyPI packages used by the Dataflow worker nodes. The Dataflow options are written as follows:
import apache_beam as beam
from apache_beam.io import ReadFromText, WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from module2.functions import function_to_use
dataflow_options = ['--extra_package=./dist/setup_tarball',
                    '--temp_location=<gcs_temp_location>',
                    '--runner=DataflowRunner',
                    '--region=us-central1',
                    '--requirements_file=./requirements.txt']
So then the pipeline will run something like this:
options = PipelineOptions(dataflow_options)
p = beam.Pipeline(options=options)
transform = (p | ReadFromText(gcs_url) | beam.Map(function_to_use) | WriteToText(gcs_output_url))
p.run()  # actually submit the job; without this nothing executes
Running this locally takes Dataflow around 6 minutes to complete, with most of the time going to worker startup. I then tried automating this code with Composer and rearranged the architecture as follows: my main (DAG) function in the dags folder, the modules in plugins, and setup_tarball and requirements.txt in the data folder. So the only parameters that really changed are:
'--extra_package=/home/airflow/gcs/data/setup_tarball'
'--requirements_file=/home/airflow/gcs/data/requirements.txt'
When I run this modified code in Composer, it works, but it takes much, much longer: once the worker starts up, it takes anywhere from 20-30 minutes before actually running the pipeline (which itself only takes a few seconds). This is much longer than triggering Dataflow from my local code, which took only 6 minutes to complete. I realize this question is very general, but since the code works, I don't think it's related to the Airflow task itself. Where would be a reasonable place to start looking to troubleshoot this problem? At the Airflow level, what can be modified? How does Composer (Airflow) interact with Dataflow, and what could potentially cause this bottleneck?
It turns out that the problem was with Composer itself. The fix was to increase the capacity of the Composer environment, i.e., increase its vCPUs. I'm not sure why this would be the case, so if anyone has an idea of the underlying cause, your input would be much appreciated!
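For reference, a sketch of what "increase vCPUs" can look like. In Composer 1 the node machine type is fixed when the environment is created, so scaling up typically means recreating the environment on larger machines (the environment name and machine type below are placeholders; verify the flags against your gcloud version):
gcloud composer environments create bigger-composer-env \
    --location us-central1 \
    --machine-type n1-standard-4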

All gms/firebase libraries must use the exact same version specification (mixing versions can lead to runtime crashes)

I have upgraded gms:play-services-analytics from 11.0.4 to 16.0.4
and firebase-messaging from 11.0.4 to 17.1.0, but lint is giving this error:
All gms/firebase libraries must use the exact same version specification (mixing versions can lead to runtime crashes). Found versions 17.1.0, 16.2.0, 16.0.4, 16.0.3, 16.0.1, 16.0.0. Examples include `com.google.firebase:firebase-messaging:17.1.0` and `com.google.firebase:firebase-iid:16.2.0`
Looking into the external libraries, I can see different versions of gms being used:
com.google.android.gms:play-services-ads-identifier-16.0.0
com.google.android.gms:play-services-analytics-16.0.4
com.google.android.gms:play-services-analytics-impl-16.0.4
com.google.android.gms:play-services-base-16.0.1
com.google.android.gms:play-services-basement-16.0.1
com.google.android.gms:play-services-measurement-base-16.0.3
Similarly:
com.google.firebase:firebase-common-16.0.0
com.google.firebase:firebase-iid-16.2.0
com.google.firebase:firebase-iid-interop-16.0.0
com.google.firebase:firebase-messaging-17.1.0
I have only added the following two dependencies:
implementation 'com.google.android.gms:play-services-analytics:16.0.4'
implementation 'com.google.firebase:firebase-messaging:17.3.4'
My root-level build.gradle contains:
classpath 'com.google.gms:google-services:4.0.1'
As mentioned in this blog post:
https://android-developers.googleblog.com/2018/05/announcing-new-sdk-versioning.html
all firebase/gms libraries can now be versioned independently, and the libraries listed above are pulled in transitively rather than declared by me.
Why am I getting this error?
For me, the cause was a rather old build tools version. Updating to build tools 28.0.3 fixed the problem.
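If it helps, that setting lives in the android block of the module-level build.gradle (a minimal sketch; the rest of the block is elided):
android {
    ...
    buildToolsVersion "28.0.3"
}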

How do I compile LESS files every time I save a document?

I've installed LESS via npm like this:
$ npm install -g less
Now, every time I want to compile source files to .css, I run:
$ lessc styles.less styles.css
Is there any way, via the command line, to make it watch for when I save the document and compile it automatically?
The best solution I've found is the one recommended on the official LESS website: https://github.com/jgonera/autoless. It is dead simple to use, and it also watches imported files for changes and recompiles.
Have a look at this article:
http://www.hongkiat.com/blog/less-auto-compile/
It offers GUI solutions (SimpLESS, WinLESS, LESS.app, and Crunch) and a Node solution (deadsimple-less-watch-compiler).
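For the Node option, usage is roughly as follows (the npm package is published as less-watch-compiler; the folder names are placeholders, so check the project's README):
$ npm install -g less-watch-compiler
$ less-watch-compiler less css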
Are you using LESS alone or with Node.js? If you are using it with Node, there are easy ways to solve this problem. The first two I can think of are (both of these go in your app.js):
using middleware, as described in this Stack Overflow discussion:
var lessMiddleware = require('less-middleware');
...
app.configure(function () {
    // other configuration here...
    app.use(lessMiddleware({
        src: __dirname + "/public",
        compress: true
    }));
    app.use(express.static(__dirname + '/public'));
});
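Note that app.configure was removed in Express 4, so on newer Express you would drop the app.configure wrapper and register the middleware with app.use directly.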
another method is to make a system call as soon as you start your Node.js instance (the method name may differ based on your Node.js version):
// before any other processing is done
var execSync = require('child_process').execSync;
execSync("lessc /public/stylesheets/styles.less /public/stylesheets/styles.css");
var app = express();
app.use(...);
In both cases, Node will automatically convert the LESS files into CSS files. Note that with the second option, Node has to be relaunched for the conversion to happen, whereas the first option answers your need better by always checking for a newer version in the given directory.

"include_recipe" vs. Vagrantfile "chef.add_recipe". What's the difference?

I just ran the nginx::source recipe on my Vagrant box, and I'm seeing very unusual behaviour.
When I include the recipes from the Vagrantfile (as below), everything works like a charm:
chef.add_recipe("project::nginx")
chef.add_recipe("nginx::source")
(The project::nginx recipe is very simple; I use it to override default attributes of the nginx cookbook.)
But if I include the recipe at the very end of project::nginx (mentioned above), everything falls apart:
node.default['nginx']['server_names_hash_bucket_size'] = 128
include_recipe "nginx::source"
Until now, I didn't know there was any difference in behaviour between those two invocations. Does anybody here know what the difference is?
Gotcha! It's a Chef 11 feature, and the issue exists solely in chef-solo. :)
To summarize quickly, the difference is:
chef.add_recipe() - loads the entire cookbook context (all the files, e.g. recipes, definitions, attributes...)
include_recipe "" - files (attributes, definitions, etc.) of cookbooks that are not in the expanded run list are not loaded.
There are at least 4 ways to solve the issue (i.e., to get the files into the run list):
include_attribute - include the desired attribute file explicitly.
metadata.rb->dependency - if your cookbook uses a recipe from another cookbook, put that cookbook in metadata.rb's dependency section, and all of its files will be loaded (see the sketch at the end of this answer).
chef.add_recipe() - load the recipe via the Vagrantfile. (Mentioned here just for reference.)
Berkshelf - you may use this cookbook manager to solve the issue as well. Here's the Stack Overflow thread about this exact problem, and some docs.
For those who are interested in further reading: Chef 11 introduced dependency-based cookbook loading for non-recipe files. The new loading logic means that files belonging to cookbooks which exist in the cookbook_path, but are not in the expanded run_list or in the dependencies of the cookbooks in the expanded run_list, will no longer be loaded. REF: Opscode breaking changes documentation; and if you need a signature of the error I got, here's exactly the same one, even for the same cause.
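As a minimal sketch of the metadata.rb->dependency option (the cookbook name and version are illustrative), the wrapping cookbook's metadata.rb would declare:
name    'project'
version '0.1.0'
depends 'nginx'
With that in place, chef-solo loads the nginx cookbook's attribute and definition files even though only project::nginx is in the run list.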

Version information on Xserver modules

I am trying to find a tool that will extract the module version information (part of the module record) from an X server module. For example, I can see the following information for the librecord module in my Xorg.0.log file:
[ 39.892] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so
[ 39.905] (II) Module record: vendor="X.Org Foundation"
[ 39.905] compiled for 1.9.0, module version = 1.13.0
[ 39.905] Module class: X.Org Server Extension
[ 39.905] ABI class: X.Org Server Extension, version 4.0
Is there a tool that would allow me to easily extract the aforementioned information? Sometimes you can use modinfo on the module and that will have version information, but that does not always work. The only consistent way I know of right now is to parse the Xorg log file. Thanks.
Yes, there is, and you can also try writing a small one yourself.
http://gitorious.org/xdriverprobe
The problem is that xdriverprobe won't compile on newer servers, since I didn't update it for the newest ABIs. Also, xdriverprobe only handles video drivers, but it can be adapted to other modules. The main source file (xdriverprobe.c) has fewer than 500 lines, so you can easily learn by reading it.
It works in Ubuntu 11.10; ./xdriverprobe -o moduledata gives the information you want.
Look at its source code. It does the following:
dlopen() the module
find a symbol called modulenameModuleData (if your module is called modulename)
that symbol is an XF86ModuleData*; see /usr/include/xorg/xf86Module.h
check its member named vers
Spend a few hours and you'll be able to write a very small program that does what you want; a sketch follows below.
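Here is a minimal, untested sketch of that approach. Rather than pulling in the full xorg-server headers, it mirrors just the leading fields of XF86ModuleVersionInfo and XF86ModuleData (layout taken from xf86Module.h; verify against your server's headers). Also note that dlopen() may fail if the module has unresolved references to server symbols, which is exactly what xdriverprobe handles more carefully:
#include <stdio.h>
#include <stdint.h>
#include <dlfcn.h>

/* Leading fields of XF86ModuleVersionInfo, mirrored from xf86Module.h. */
typedef struct {
    const char *modname;
    const char *vendor;
    uint32_t    _modinfo1_;
    uint32_t    _modinfo2_;
    uint32_t    xf86version;
    int8_t      majorversion;
    int8_t      minorversion;
    int16_t     patchlevel;
    /* abiclass, abiversion, moduleclass, checksum omitted */
} VersionInfo;

/* Layout of XF86ModuleData: version info plus setup/teardown hooks. */
typedef struct {
    VersionInfo *vers;
    void        *setup;
    void        *teardown;
} ModuleData;

int main(int argc, char **argv)
{
    char sym[256];
    void *handle;
    ModuleData *data;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <module.so> <modulename>\n", argv[0]);
        return 1;
    }
    handle = dlopen(argv[1], RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* e.g. "record" -> "recordModuleData" */
    snprintf(sym, sizeof sym, "%sModuleData", argv[2]);
    data = (ModuleData *) dlsym(handle, sym);
    if (!data || !data->vers) {
        fprintf(stderr, "symbol %s not found\n", sym);
        dlclose(handle);
        return 1;
    }
    printf("vendor=\"%s\", module version = %d.%d.%d\n",
           data->vers->vendor,
           data->vers->majorversion,
           data->vers->minorversion,
           data->vers->patchlevel);
    dlclose(handle);
    return 0;
}
Compile with gcc probe.c -o probe -ldl and run it as ./probe /usr/lib/xorg/modules/extensions/librecord.so record.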
More information: http://www.xfree86.org/current/DESIGN17.html#65 (very old document, but most of what's written there is still true today). If you're not happy with that document, you have to read the Xorg source code.
Happy hacking!
