I am trying to configure Kamailio with XCAP. Unfortunately, after installing Kamailio with the xcap and presence modules, some other dependent modules (like pua and rls) are needed. How can I load these modules without reinstalling Kamailio again? One more thing: even though the pua and rls directories are available, the pua.so and rls.so files are missing, which are needed to load them.
RLS is not needed for XCAP. Also, PUA is not a must; it can be needed if you want some presence extensions, but that is independent of XCAP.
Rather old by now, but here is a tutorial with Kamailio and XCAP, not needing RLS or PUA:
http://kb.asipto.com/kamailio:presence:k31-made-simple
On the other hand, regarding installing new modules: you can install a single module at a time; for rls, just do:
make install-modules modules=modules/rls
I assumed you installed from source; otherwise, the kamailio-presence-modules package (.deb or .rpm) should contain all the modules related to presence and XCAP.
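If you do end up needing pua or rls after installing them this way, they are loaded from kamailio.cfg like any other module. A minimal sketch, assuming the modules were installed into Kamailio's module path and that the usual presence and database parameters are configured elsewhere in your config:
loadmodule "pua.so"
loadmodule "rls.so"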
I'm working on R projects behind a proxy server, which is why I use the keyring library to store my proxy credentials and to authenticate against the proxy manually whenever it is required. This way, I don't need to write HTTPS_PROXY=http://usr:pw@proxy:port anywhere in plaintext, neither in the global environment nor per project. Of course, at runtime Sys.getenv() does contain this string, but at least only for the session.
So far so good. Now I need to use virtual environments because of some package version mismatches in my projects. For that I ran renv::init(). After closing and reopening the project, RStudio seems to freeze while loading it. I guess renv somehow tries to reach the package repositories (some packages are on CRAN, some on a local GitLab), which cannot work as the proxy is not set.
When I create a .Renviron including the proxy settings with my username and password, everything works fine.
Do you know a way to prevent renv from trying to connect to the package sources at project start? Or do you think the problem lies somewhere else?
My best guess is that renv is trying to make a request to the active package repositories on startup, as part of its attempt to verify that the lockfile + library are in sync. If that's the case, you could disable this via:
RENV_CONFIG_SYNCHRONIZED_CHECK = FALSE
in your .Renviron. See https://rstudio.github.io/renv/reference/config.html for more details.
Alternatively, you could tell renv to load your credentials earlier on in a couple of ways, e.g.:
Try adding the initialization work to either your project .Rprofile or (with RENV_CONFIG_USER_PROFILE = TRUE) your user ~/.Rprofile;
Try adding the initialization code to a file located at renv/settings.R, which renv will source relatively early on during load.
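For illustration, a minimal sketch of such initialization code (e.g. in the project .Rprofile or in renv/settings.R), assuming the credentials live in a hypothetical keyring service called "proxy"; the username, host, and port shown here are placeholders:
# hypothetical: restore the proxy from keyring before renv touches any repository
cred_user <- "myuser"                               # assumed keyring username
cred_pw   <- keyring::key_get("proxy", cred_user)   # assumed keyring service name
proxy_url <- sprintf("http://%s:%s@proxy.example.com:8080", cred_user, cred_pw)
Sys.setenv(HTTPS_PROXY = proxy_url, HTTP_PROXY = proxy_url)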
We are evaluating the migration from our current client/server application to .NET Core. The 3.0 release added the support for WinForms we need for our client, but ClickOnce will not be supported.
Our solution is installed on-premise and we need to include settings (among others) like the address of the application server. We dynamically create ClickOnce packages that can be used to install and update the clients and that include the settings. This works like a charm today. The users install the client using the ClickOnce package, and every time we update the software we regenerate these packages at the customer's site and they automatically get the new version with the right settings.
We are looking at MSIX as an alternative, but we have got a question:
- Is it possible to add some external settings files to the MSIX package that will be used (deployed) when installing?
The package for the software itself could be statically generated, but how could we distribute the settings to the clients on first install / update?
MSIX has support for modification packages. This is close to what you want: the customization is done with a separate package, installed after you install the main MSIX package of your app.
It cannot be installed at the same time as your main app. The OS checks whether the main app is installed when you try to install the modification package, and will reject the installation if the main app is not found on the machine.
The modification package is a standalone package, installed in a separate location. Check the link I included; there is a screenshot of a PowerShell window where you can see that the install paths of the main package and the modification package are different.
At runtime (when the user launches the app) the OS knows these two packages are connected and merges their virtual file systems and registries, so the app "believes" all the resources are in one package.
This means you can update the main app and the modification package separately, and deploy them as you wish.
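For illustration, a rough PowerShell sketch of what that looks like when sideloading manually for a test; the package names and paths are placeholders, not the actual packages discussed here:
# hypothetical: install the main app, then the customer-specific modification package
Add-AppxPackage -Path .\MyApp.msix
Add-AppxPackage -Path .\MyApp.CustomerSettings.msix
# both appear as separate installed packages with different install locations
Get-AppxPackage -Name "*MyApp*" | Format-List Name, InstallLocation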
And if we update the modification package itself (without touching the main app), will it be re-installed on all the clients that used it?
How do you deploy the updates? Do you want to use an auto-updater tool over the internet? Or are these users managed inside an internal company network and get all the app updates from tools like SCCM?
The modification packages were designed mostly for IT departments to use, and this is what I understood you would need too.
A modification package is deployed via SCCM or other tools just like the main package, there are no differences.
For ISVs I believe optional packages are a better solution.
I just saw a package that, in order to run properly, asks you to put something in the public section of settings.json. That made me wonder if the rest of the information there (sometimes sensitive, like AWS keys) is accessible as well.
So, should I be worried about this, or does Meteor hide this information from packages?
Any package you install from any package manager including NPM, Ruby Gems, and the Meteor package server can run arbitrary code on your computer as your user, including using the fs module to read and write files, accessing the network to send and receive data, etc.
In fact, you place the same trust in the developer whenever you install an application from the internet - almost any application on your computer could read your settings.json file, for example Dropbox, Chrome, etc.
Therefore, there is no way to completely secure the settings.json file from package code. The only way to be sure that packages are safe is to use only community-approved packages or read the source code of the packages you are using.
Problem:
I would like to make julia available for our developers on our corporate network, which has no internet access at all (no proxy), due to sensitive data.
As far as I understand, Julia is designed to use GitHub.
For instance julia> Pkg.init() tries to access:
git://github.com/JuliaLang/METADATA.jl
Example:
I solved this problem for R by creating a local CRAN repository (rsync) and setting up a local webserver.
I also solved this problem for python the same way by creating a local PyPi repository (bandersnatch) + webserver.
Question:
Is there a way to create a local repository for metadata and packages for julia?
Thank you in advance.
Roman
Yes, one of the benefits of using the Julia package manager is that you should be able to fork METADATA and host it anywhere you'd like (and keep a branch where you can actually check new packages before allowing your clients to update). You might be one of the first people to actually set up such a system, so expect that you will need to submit some issues (or better yet, pull requests) in order to get everything working smoothly.
See the extra arguments to Pkg.init() where you specify the METADATA repo URL.
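For example, with the pre-1.0 package manager the question is about, something along these lines should work; the URL is a placeholder for your internal METADATA mirror:
julia> Pkg.init("http://git.internal.example/julia/METADATA.jl.git")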
If you want a simpler solution to manage, I would also think about a two-tier setup where you install packages on one system (connected to the internet) and then copy the resulting ~/.julia directory to the restricted system. If the packages you use have binary dependencies, you might run into problems if you don't have similar systems on both sides, or if some of the dependencies are installed globally, but Pkg.build("Pkgname") might be helpful.
This is how I solved it (for now), using the second suggestion by ivarne. I use a two-tier setup: two networks, one connected to the internet (office network) and one air-gapped network (development network).
System information: openSuSE-13.1 (both networks), julia-0.3.5 (both networks)
Tier one (office network)
installed julia on an NFS share, /sharename/local/julia.
soft linked /sharename/local/bin/julia to /sharename/local/julia/bin/julia
appended /sharename/local/bin/ to $PATH using a script in /etc/profile.d/scriptname.sh
created /etc/gitconfig on all office network machines: [url "https://"] insteadOf = git:// (to solve proxy server problems with github)
now every user on the office network can simply run julia
Pkg.add("PackageName") is then used to install various packages.
The two networks are connected periodically (with certain security measures ssh, firewall, routing) for automated data exchange for a short period of time.
Tier two (development network)
installed julia on an NFS share, the same as tier one.
When the networks are connected I use a shell script with rsync -avz --delete to synchronize the .julia directory of tier one to tier two for every user.
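For illustration, the sync step might look roughly like this; the share paths, host name, and user list are placeholders for the actual environment:
#!/bin/bash
# hypothetical sync script, run while the two networks are connected
for user in alice bob; do                                   # assumed list of users
    rsync -avz --delete "/office-share/home/$user/.julia/" \
        "devnet-host:/dev-share/home/$user/.julia/"
done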
Conclusion (so far):
It seems to work reasonably well.
As ivarne suggested, there are problems if a package is installed AND something more than just file copying is done (compilation?) on tier one; then the package won't run on tier two. But this can be resolved with Pkg.build("Pkgname").
PackageCompiler.jl seems like the best tool for using modern Julia (v1.8) on secure systems. The following approach requires a build server with the same architecture as the deployment server, something your institution probably already uses for developing containers, etc.
Build a sysimage with PackageCompiler's create_sysimage() (a minimal sketch follows the launcher script below)
Upload the build (sysimage and depot) along with the Julia binaries to the secure system
Alias a script to julia, similar to the following example:
#!/bin/bash
set -Eeu -o pipefail
unset JULIA_LOAD_PATH                      # clear any load path inherited from the environment
export JULIA_PROJECT=/Path/To/Project      # use the project environment shipped with the build
export JULIA_DEPOT_PATH=/Path/To/Depot     # use the depot uploaded alongside the sysimage
export JULIA_PKG_OFFLINE=true              # never let Pkg try to reach the internet
/Path/To/julia -J/Path/To/sysimage.so "$@" # start Julia with the prebuilt sysimage
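And for the first step, a minimal create_sysimage() sketch to run on the connected build server; the package names and paths are assumptions, not part of the original pipeline:
# hypothetical build script (PackageCompiler 2.x API)
using PackageCompiler
create_sysimage(
    ["DataFrames", "CSV"];                       # assumed project packages
    sysimage_path = "/Path/To/sysimage.so",
    precompile_execution_file = "precompile.jl"  # optional script exercising typical workloads
)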
I've been able to run a research pipeline on my institution's secure system, for which there is a public version of the approach.
We have a series of scripts that are used when building Apache. Specifically these scripts are used to:
patch the vanilla source code,
patch in customer specific functionality,
build and install the Apache server,
build customer specific modules, and
create a custom install tarball ready for eventual deployment.
N.B. The deployment of the tarballs and configs is handled by other, independent mechanisms. This system creates the install tarball for delivery to Ops, who then handle the deployment to the web farm.
We have versions of these patches and modules for:
different OS's and OS versions, and
different Apache releases.
N.B. The different Apache releases tend to exist only during testing and migration to a new release of Apache across the web farm.
I have now brought the patches and custom modules under CVS and currently have them tagged according to Apache release version.
Should I keep the tagging simple and use subdirectories to distinguish between Linux-specific and Solaris-specific elements? Or should I keep one directory structure and use more complex tagging to distinguish elements for the different OS's?
We are currently using CVS and are about to migrate to SVN.
For platform-specific versions, it usually helps to:
have template scripts (i.e. scripts with variables in them)
have a deployment script which copies the template and replaces the variables with platform-specific data (hostname, port number, paths, ...); see the sketch at the end of this answer
That is not always possible for all types of scripts, but if the differences are limited, that approach allows you to go with:
one simple tagging scheme
one central place to manage platform-specific differences (the "deployment" script)
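For illustration, a minimal sketch of such a deployment script, assuming a hypothetical template httpd.conf.tmpl with @VAR@ placeholders and a per-platform variables file; none of these names come from the original setup:
#!/bin/sh
# hypothetical deploy.sh: instantiate the template for one platform
# usage: ./deploy.sh linux|solaris
PLATFORM="$1"
. "./vars.${PLATFORM}"            # assumed file defining HOSTNAME, PORT, APACHE_PREFIX
sed -e "s|@HOSTNAME@|${HOSTNAME}|g" \
    -e "s|@PORT@|${PORT}|g" \
    -e "s|@PREFIX@|${APACHE_PREFIX}|g" \
    httpd.conf.tmpl > httpd.conf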