Creating OpenVAS scan config through CLI (OMP) - openvas

Is there any way to create a scan configuration (scan config) for OpenVAS running on CentOS 7 by specifying the NVT families by means of the OMP command create_config?
If so, please provide a detailed example.

I don't know about creating a scan config by specifying only the NVT families, but you can, for example, download a config from the Greenbone GUI and then wrap it in
<create_config>YOUR CONFIG RESPONSE GOES HERE</create_config>
Then you can easily import it with omp:
cat scan-config.xml | $CS_OMP -u $USER_NAME -w $USER_PASSWORD -i -X -
Ref: http://docs.greenbone.net/API/OMP/omp-7.0.html#command_create_config

This is the only major drawback to automating OpenVAS. You can create a scan config solely using OMP, but the way you specify the NVTs to use in a config is backwards: you can't simply list the NVTs you want to exclude. You have to specify all the NVTs you'd like to use, minus the ones you don't.
That is, you grab a list of all families and, within each family, a list of every NVT it contains that you want. You then modify a cloned default config with repeated calls to modify_config, one per family, each including all the NVTs you want to use from that family.
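For illustration, here is a rough sketch of that clone-and-prune loop using omp -X. This is untested, and the config UUID, family name, and NVT OID are placeholders; the exact XML is in the OMP 7.0 reference linked in the first answer.
# Clone a built-in config (the UUID is a placeholder; list real ones with <get_configs/>)
omp -u "$USER" -w "$PASS" -X '<create_config><copy>daba56c8-73ec-11df-a475-002264764cea</copy><name>my-config</name></create_config>'
# List the NVT OIDs within one family
omp -u "$USER" -w "$PASS" -X '<get_nvts family="Port scanners"/>'
# Select the wanted NVTs from that family in the clone; repeat for every family
omp -u "$USER" -w "$PASS" -X '<modify_config config_id="UUID-RETURNED-BY-CREATE"><nvt_selection><family>Port scanners</family><nvt oid="1.3.6.1.4.1.25623.1.0.14259"/></nvt_selection></modify_config>'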
It is super painful. Nearly every article on the web covers either basic scanning or GUI usage, which is useless here. Programmatically specifying a custom scan config is what you want, and OMP is not well suited to the task as it stands today. If you can find the backend calls gsad (the Greenbone UI) uses to directly modify cloned scan configs (i.e. unchecking an unwanted NVT), let me know; I have yet to look at the source code in detail. The problem with that approach is that it probably circumvents OMP, and it is not recommended as it may break with each release of OpenVAS.
Good luck and happy coding.

Related

How can I permanently set an environment variable using Autotools?

I'm adapting an existing program to use Autotools for its build, but the resulting process depends on an environment variable. Is there a way to permanently set this environment variable during the build or installation process?
The program is intended for Unix users. I could append an export command directly to the .bashrc file and warn the user if that fails, since most of them will just run it on Ubuntu (it's a relatively simple program that targets students), but I'd like to know if there's a more portable way to do this.
This is what I'd rather not do:
echo 'export VAR=/my/totally/not/hardcoded/path' >> $HOME/.bashrc
Sorry to come to this late, but all of the answers to date are shockingly ... incomplete.
Building and installing software are both core use cases for the Autotools, and the installation part can absolutely involve adding or modifying files that affect user environments. If the software is installed by a user with sufficient privilege, then such effects can absolutely be applied to all system users, though the details may vary a bit from system to system (and the Autotools can help with that, too!).
For example, on RedHat-family Linuxes such as RedHat Enterprise Linux, Fedora, Oracle Linux, and various others, you can drop an appropriately named file in /etc/profile.d, and the commands in it will automatically be read and executed by every login shell. Setting environment variables for all users is one of the common uses of this feature. I'm uncertain about Debian-family Linuxes such as Ubuntu, but it is always possible to modify the file /etc/profile instead to the same effect, and you absolutely can write an Automake install hook to do that.
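As a minimal sketch of the profile.d route (the snippet name myprog.sh and the variable MYPROG_HOME are hypothetical, and with Automake's default prefix you would need to configure --sysconfdir=/etc for the file to land in /etc/profile.d):
# Makefile.am fragment (hypothetical names)
profileddir = $(sysconfdir)/profile.d
profiled_DATA = myprog.sh
# Generate the snippet at build time so it picks up the configured paths
myprog.sh: Makefile
	echo 'export MYPROG_HOME="$(pkgdatadir)"' > $@
CLEANFILES = myprog.sh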
Or for an altogether different approach, you can always provide a wrapper script around your program that sets the needed environment variables (supposing that the point is other than to add a directory to the PATH so as to find the program in the first place). In that case, you can even install the main program in a location that is not ordinarily in the path, so that users don't accidentally run it directly. This mechanism has the advantage that the environment variables are scoped to a run of the program, not a whole login session, but the disadvantage that users cannot override them.
I guess not.
The Autotools are about building your program, not about setting up the environment the program runs in. That's what users/admins are supposed to do. (Well, I can imagine doing this, but I really don't want to try to figure it out, because the idea itself seems broken to me.)
If your program REALLY needs some environment variable at run time, then you should patch your sources so the application tests whether the variable exists and falls back to a sensible default if it doesn't. Another idea is to require an obligatory command-line switch to pass the value in.
It's not clear what this has to do with Autotools (or any other build system). No build system, by itself, can arrange for an environment variable to be present when the program it builds is run at a later time.
One solution is for your program to have a hardcoded default value for the var which is used if the environment var isn't present when the program starts running. Another frequently used solution is to name your binary something like myprog.bin and install a shell script named myprog which sets up the environment before doing exec myprog.bin.
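A minimal sketch of such a wrapper, installed as myprog (the variable name and install path are illustrative, not from the question):
#!/bin/sh
# Set up the environment the real binary needs, then replace this process with it.
MYPROG_HOME=/usr/local/share/myprog   # placeholder variable and value
export MYPROG_HOME
exec /usr/local/libexec/myprog.bin "$@"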
I'm adapting an existing program to use Autotools for its build, but the resulting process depends on an environment variable. Is there a way to permanently set this environment variable during the build or installation process?
You've not been very concrete about what the program is (e.g. is it a daemon? A user program?) or the nature of the environment variable dependency (e.g. is it another program? A mount point? A URL? A DB connection string?). Being more specific might get you a better answer.
Anyway, Autotools is not likely to offer any feature to help: it's a build system. Depending on the nature of your environment variable dependency, you're likely going to need package management (if you package it) or system-administration-level setup.
Since you think your primary user base is on Ubuntu, this help page might give you some ideas.

How to upgrade (merge) web.config with web deploy (msdeploy)?

I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high-level overview of the chain is as follows:
The web developer creates the code and checks it into SVN;
The build server sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their webserver(s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some more critical parameters at the first-time installation (achieved by msdeploy's parameter mechanism), but they can do others by hand.
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and follow by superimposing your customer's customisations on top of your standard setup. If it's a brand-new install then there will be nothing to superimpose.
top-level
  standard files              <- never touched or changed by the customer
    images
    settings.txt
  customer files              <- customer hacks this to their heart's content
    images
    settings.txt_override
Yes, this does mean that some kind of merging process needs to happen, and there needs to be a script that does that (a sketch follows the list below), but this approach has several advantages.
For settings that suddenly become redundant, just issue a warning to that effect.
If a customer has their own logo, provide the ability to specify it in the override file.
The message to customers is clear: stay off the standard files.
If customers request more customisable settings, then during upgrades write the default into the override file if it does not already exist.
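Here is a rough sketch of such a merge step (assuming a simple key=value format for settings.txt, which is not specified above; the directory layout follows the diagram):
#!/bin/sh
# Superimpose the customer's overrides onto the freshly installed standard settings.
cp standard/settings.txt merged/settings.txt
while IFS='=' read -r key value; do
    [ -z "$key" ] && continue                                  # skip blank lines
    if grep -q "^$key=" merged/settings.txt; then
        sed -i "s|^$key=.*|$key=$value|" merged/settings.txt   # apply the override
    else
        echo "warning: setting '$key' is no longer used" >&2   # redundant setting
    fi
done < customer/settings.txt_override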
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation:
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation:
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt"; if it exists, the upgrade .bat script runs, with the version number contained in the text file. A bit basic, but it should work.
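A sketch of that check (written as a shell script for brevity; on a Windows web server the runcommand hooks above would point at an equivalent .bat, and the path is illustrative):
#!/bin/sh
# Pre-sync hook: if version.txt exists, this is an upgrade, not a fresh install.
if [ -f /var/www/site/version.txt ]; then
    old_version=$(cat /var/www/site/version.txt)
    echo "Upgrading from version $old_version - running upgrade steps"
    # ...version-specific merge/cleanup steps go here...
fi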
This also has the added advantage of letting you merge a customer's custom settings between versions more elegantly, since you know which properties can be overridden for that particular version.
These are general suggestions (not specific to msdeploy), but I hope they help:
I think you'll need to provide several installers anyway: one for the initial setup and one for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this links will give you some pointers.
As for the replaced logos and other client customizations, I think the best approach would be to support branding your application; that is, move all branding details to a place that your new/upgrade installers won't touch.
As for the rest of the adjustments made by your clients, they make those at their own risk, so the only help you can provide is a detailed list of changes (maybe even the list of changed files since the previous version) and a how-to article about merging the sources with tools like Araxis Merge or similar.
Or you could create a utility, included in the installer, that tries to do all the tricky merging on the client's machine. I would not recommend this, as it requires a lot of effort and resources to maintain.
One more thing: you could focus on backing up the previous client copy before the upgrade, so that even if a client has trouble upgrading, it will always be possible to roll back. The remaining task is to provide a good feedback channel your clients can use to report their troubles. That feedback will let you figure out what troubles your clients run into and how to make their upgrade process more comfortable.
I would build on what the above have said, but I would do it with transformations, and strict documentation about who configures what. The way you have it now relies on customer intervention against a config that is mission critical to the app deploy process.
Create three config file areas. One for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory: this is the transform that takes the production-generic template and creates a web.config for your development server. (It sounds like you are shooting for a CI-type process here.)
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
Is that what you were looking for? Thoughts?

opening and changing wp-config file with bash script in nano

I've created, or rather copied from a number of sources, the necessary pieces to make my own bash script that will install WP via SSH. I'm not a coder at all, so many things will go over my head, but my patience and desire to learn will compensate.
I've created a bash script to download WordPress, create a new database (with correct permissions), and rename the wp-config-sample.php file to wp-config.php, then open it up in nano. This is where I'm stuck. I'd like to have the details of the newly created database automatically inserted into the correct places within the wp-config file. Then I'd like to fetch this URL (https://api.wordpress.org/secret-key/1.1/salt/) and save those salt keys in the same wp-config.php file, then save and quit.
Please help!
I don't think nano is going to be scriptable in the manner you need.
You need to look into templates. The general idea is to put placeholders like %DB_NAME% and %DB_USER% inside a template file like wp-config-template.php. Then you can use something like sed to search and replace these placeholders with real values to generate the final wp-config.php.
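A minimal sketch of that approach, including the salt step from the question (the placeholder names %DB_PASSWORD% and %SALT_KEYS% are assumptions; only %DB_NAME% and %DB_USER% come from the suggestion above):
#!/bin/bash
DB_NAME="wp_mydb"   # illustrative values from your database-creation step
DB_USER="wp_user"
DB_PASS="s3cret"
# Replace the placeholders in the template to produce the real config
sed -e "s/%DB_NAME%/$DB_NAME/" \
    -e "s/%DB_USER%/$DB_USER/" \
    -e "s/%DB_PASSWORD%/$DB_PASS/" \
    wp-config-template.php > wp-config.php
# Fetch the salts and splice them in where a %SALT_KEYS% marker line sits
curl -s https://api.wordpress.org/secret-key/1.1/salt/ > /tmp/salts.php
sed -i -e '/%SALT_KEYS%/r /tmp/salts.php' -e '/%SALT_KEYS%/d' wp-config.php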
However, doing this in sed will get messy pretty quickly. You are better off using a scripting language like Perl or Python once you start doing templating; they will do the heavy lifting for you.
Finally, not to discourage your efforts, but building up a server the way you are is tricky, because there are a number of things that could go wrong. You also need to be idempotent, i.e. able to rerun your entire set of scripts with only the changes being applied, so that if you decide to add or remove packages, the rest of the system remains untouched.
If you are looking for a good guideline containing some the best tools to do this, you should check out the Genesis-WordPress project. It uses Ansible for provisioning and Capistrano for deployment.
Good Luck!
If you want a disgusting but functional example of how to change wp-config.php from environment variables in a bash script, check out the "official" WordPress scripts for creating a Docker image.

Best rsync syntax for transferring a wordpress site

I've found several rsync commands for moving my wordpress site from one local machine to a remote server. I've successfully used the following command suggested by another Stack Overflow user:
rsync -aHvz /path/to/sfolder name#remote.server:/path/to/remote/dfolder
Would you say that it's enough, or would you suggest other attributes?
It's my understanding that an SSH connection would be safer. How does this command change if I want to make the transfer over SSH? Also, are there other things to be done besides including the SSH command (like generating/installing keys, etc.)? I'm just starting out, so a detailed explanation for a noob would be very much appreciated.
Pakal
There are thousands of ways in which you can customize this powerful command. Don't worry about SSH; it uses SSH by default. The rest of the options depend on your requirements. You could consider '--perms' to preserve permissions and '-t' to preserve times, though note that the '-a' flag you're already using implies both.
Also, '-n' will show you a dry run of the transfer, and the '-B' switch lets you override the block size.
You should probably look through the options yourself and pick the appropriate ones by running 'info rsync'.
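If you want the SSH transport to be explicit, for example to point at a specific key, something like this should work (the paths, user, and host are the illustrative ones from the question):
rsync -aHvz -e "ssh -i ~/.ssh/id_rsa" /path/to/sfolder name@remote.server:/path/to/remote/dfolder
Note that adding a trailing slash to the source path would copy the folder's contents rather than the folder itself.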
The above command will use SSH, and I see no problems with its general usage.

Alternative tools for Amazon EC2?

Amazon's official tools for interacting with EC2 are kind of clunky and a pain to deal with. I have to set up a bunch of environment variables, store separate private keys just for EC2, add extra items to my PATH, and so on. They all output tab delimited lines that are hundreds of characters long with no headings, so it's a bit of a pain to interpret them. Their instructions for setting up an SSH keypair give you one that isn't protected by a passphrase, rather than letting you use an existing keypair that you already have. The programs are all just a bit clunky and aren't very good Unix programs.
So, are there any easier-to-use command-line tools for accessing EC2? I know there is ElasticFox, and there is their web-based console, both of which make the process easier, but I'm wondering if anyone else has written better command-line tools for interacting with EC2.
I'm a bit late but I have a solution!
I found the same problems with the Amazon AMI tools. They're a decent reference implementation but very difficult to use, particularly when you have more than a couple of instances. I wrote a replacement command-line tool, called Rudy, as part of another project; it answers most of your concerns.
The commands are more intuitive than Amazon's AMI tools:
rudy-ec2 instances -C
rudy-ec2 groups -A -p 8080 -a 11.22.33.44 group-name
rudy-ec2 volumes -C -s 100
rudy-ec2 images
...
All configuration is in a single file (~/.rudy/config).
It can output in several formats (yaml, json, csv, tsv, and of course regular text):
rudy-ec2 -f yaml snapshots
---
:awsid: snap-2457b24d
:progress: 100%
:created: "2009-05-08T15:24:17.000Z"
:volid: vol-4ee10427
:status: completed
Regarding the private keys: there are no EC2 tools that allow you to create private keys with a password for booting a public instance, because the API doesn't support it. However, if you create your own image, you can use your own private keys.
Here's more info:
GitHub Project
An introduction to rudy-ec2
ElasticFox is handy for most tasks. There are occasions, though, when a command-line tool is better suited. I personally use the boto library for Python; it is very easy to script all the required operations, and you can also use it to upload/download files from S3. In general, I would say that a scripting language like Python or Ruby, together with an AWS library, is the best solution.
I personally use Tim Kay's Perl command-line tools and haven't used the original Java-based API for quite some time. Excellent for a UNIX environment.
Not command line, but take a look at what a free RightScale account will give you - much, much easier than command line or ElasticFox IMO.
About ec2-api-tools:
I agree that they are a bit too clunky; I particularly dislike the output of ec2-describe-instances.
I recently switched to python-boto which offers a very clean and easy to use interface to ec2.
About not being able to specify a passphrase for the ssh key generated by EC2:
That's not the case. You can change the passphrase of any ssh private key anytime, using:
ssh-keygen -p -f /path/to/keyfile
E.g.
ssh-keygen -p -f ~/.ssh/id_rsa
About uploading your own ssh key pair:
You can use ec2-import-keypair, like this:
for i in $(ec2-describe-regions | cut -f 2); do
    ec2-import-keypair --region "$i" mykey --public-key-file ~/.ssh/id_rsa.pub
done
The example above will upload the public key in ~/.ssh/id_rsa.pub to every region under the name "mykey". Remember that each region has its own keypairs.
In order to have the key installed in your ec2 instances, you'll have to pass the "-k mykey" option to ec2-run-instances.
Incidentally, uploading your own keypair is the only way to login with the same key to all the instances in all regions. If you create the keypair from the web interface, you'll have a different key in every region.
I have an open source graphical system admin tool called EC2Dream that replaces the command-line tool. It installs on Windows, Linux and Mac OS clients and is written in Ruby and FXRuby. See www.ec2dream.com.
If you use Windows, try the tool linked below (part of the O2 Platform), which gives you an easy way to start and stop Amazon EC2 images. If you need to extend the tool, you can easily add new features, since it is just a C# script that is dynamically compiled and executed.
O2 Tool – Amazon EC2 Browser
Amazon EC2 Browser – Timer to Stop Instances
The problem with alternative libraries is that they are not always kept up to date, so if new AWS features are released, you need to wait. You posted that your main problems are the bunch of environment variables, the extra items added to your PATH, etc. We had this issue at BitNami, and it is the main reason we created BitNami Cloud Tools, which ships all of the AWS command-line tools together with preconfigured Java and Ruby language runtimes. You only have to download it, and everything you need will be installed in a folder without modifying your system configuration. We keep it regularly up to date.
There is an entire industry, called cloud management, that tries to solve this type of problem. Scalr and RightScale are the leaders in this sector (disclaimer: I work at Scalr).
Cloud management software is built on top of the Amazon EC2 API (and usually other public IaaS APIs, like Rackspace) and provides an improved user interface along with automation tools like backups or SSH management, as you mentioned. They don't provide easier command-line tools strictly speaking; their goal is to make interaction with Amazon EC2 easier.
Different options are available on the market:
Scalr: available as a hosted service with a trial version. Otherwise you can download and install the source code yourself, as it is released under the Apache 2 license.
RightScale: while they are usually considered expensive for small businesses, they do offer a free account.
enStratus: they offer a freemium model like RightScale.
