Greenbone Community Edition (GCE) does not give results - openvas

I have installed the Greenbone Community Edition (GCE) ISO mentioned in the installation documentation in a VirtualBox VM on Linux Mint, with a bridged adapter over Wi-Fi in a home network. The IP address the VM got was 192.168.1.111.
Via the Advanced Task Wizard I started a new scan, and after a few seconds it gave me the results:
Actually, it didn't give any results at all.
What am I doing wrong? Should I do something further?

The most common reason for this is that the target is not answering ICMP Echo Requests, which is the default method for deciding whether a target is alive.
Please check the "Alive Test" setting of your Target definition (found via Configuration -> Targets) and try some of the other available methods like "TCP Service Ping" or even "Consider Alive".
One additional issue might originate from the initial sync of the NVT feed, which can take an hour or more. Without a fully synced feed (check the availability of the NVTs via SecInfo -> NVTs) you also won't get any results.
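If you prefer to script the alive-test change rather than click through the web UI, here is a minimal sketch that speaks GMP to gvmd over its Unix socket using only the Python standard library, run on the GCE machine itself. The socket path, credentials, target UUID and the exact alive-test strings are assumptions; copy the real UUID from Configuration -> Targets and verify the element names and values against your GMP version.

import socket

GVMD_SOCKET = "/run/gvmd/gvmd.sock"  # assumption: default gvmd socket path; may differ in the appliance
TARGET_ID = "00000000-0000-0000-0000-000000000000"  # placeholder: use the real target UUID

def gmp(sock, xml):
    # Send one GMP command and return the (possibly partial) XML response.
    sock.sendall(xml.encode())
    return sock.recv(1 << 20).decode()

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(GVMD_SOCKET)
    print(gmp(s, "<authenticate><credentials>"
                 "<username>admin</username><password>admin</password>"
                 "</credentials></authenticate>"))
    # "Consider Alive" skips the alive check entirely; the web UI also offers
    # TCP-based ping methods if you want the check kept but relaxed.
    print(gmp(s, f'<modify_target target_id="{TARGET_ID}">'
                 f"<alive_tests>Consider Alive</alive_tests>"
                 f"</modify_target>"))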

How to see what manufacturer owns a MAC address range/prefix

I am looking for a way to programmatically get the name of the vendor that owns a MAC address within a block/range that they purchased. Preferably by querying some API or database, language agnostic. Or if there is some other way that applications do it that I am unaware of.
For example, running nmap -sn 192.168.1.0/24 with root privileges yields
...
Nmap scan report for 192.168.1.111
Host is up (0.35s latency).
MAC Address: B8:27:EB:96:E0:0E (Raspberry Pi Foundation)
...
... and that tells me that the Raspberry Pi Foundation "owns" that MAC Address, within the prefix range that they own: B8:27:EB.
However, I am not sure how nmap knows this, nor how I could find this out myself. Parsing nmap output is not an ideal solution for me. Here's what I found from digging online:
This Stack Overflow question references a site that appears to do this; however, it doesn't seem to have been updated since 2013, nor does it expose any API endpoints. Most notably, it does not have the newer block of MAC addresses that the Raspberry Pi Foundation reserved for their newer models (under Raspberry Pi Team, or something along those lines).
I found that the IEEE handles these registrations through their site; however, it appears to be aimed at their customers, and I could not find an exposed endpoint for their search function.
On that same IEEE page linked above, it looks like I can get a CSV file of their entire database. However that seems large, and would have to be actively kept up-to-date. Does nmap come with an updated database generated from those files locally?
If a public-facing API like I'm envisioning doesn't exist, I'll make one myself for fun. I'd first like to know if I'm thinking about this wrong and if there is an official, "canonical" way that I have not found. Any help would be appreciated, and thank you.
The maintainers of nmap keep a list of prefixes as part of the tool. You can see it here:
https://github.com/nmap/nmap/blob/master/nmap-mac-prefixes
They keep this up to date by periodically importing the public registry on this site:
https://regauth.standards.ieee.org/standards-ra-web/pub/view.html#registries
Note that those files are rate-limited, so you should not be querying those CSV files ad hoc as part of a software package; rather, you should do what nmap does and keep an internal list that you synchronize periodically.
I'm not aware of a publicly available tool to query them as an API; however, creating one that works the same way nmap does would be fairly trivial. nmap does not update that file more than once or twice a year, which makes me suspect the list doesn't change often enough for maintaining your own copy to be too onerous (you could even download nmap's list every so often).
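For illustration, here is a minimal Python sketch of the lookup nmap does, assuming you have downloaded a copy of nmap-mac-prefixes locally (the file is plain text: a 6-hex-digit prefix, whitespace, then the vendor name, with '#' starting comments).

def load_prefixes(path="nmap-mac-prefixes"):
    # Build a dict mapping the 24-bit OUI (as 6 hex chars) to the vendor name.
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            oui, _, vendor = line.partition(" ")
            table[oui.upper()] = vendor.strip()
    return table

def vendor_of(mac, table):
    # Normalize the MAC and look up its first three octets.
    oui = mac.replace(":", "").replace("-", "").upper()[:6]
    return table.get(oui, "unknown")

prefixes = load_prefixes()
print(vendor_of("B8:27:EB:96:E0:0E", prefixes))  # expected: Raspberry Pi Foundation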

BluetoothLEAdvertisementPublisher not working on Win10

I have tried the C# sample code found at https://learn.microsoft.com/sv-se/samples/microsoft/windows-universal-samples/bluetoothadvertisement/.
It uses the class BluetoothLEAdvertisementPublisher. I have built that on my machine and executed it. I start the advertisement in the foreground (or background) and then start an app on my phone (I tried LightBlue and BLE Scanner). My PC isn't seen (I do see other devices).
The same happens when I write similar code myself. I have also tried writing a GattServiceProvider implementing a number of services. When I publish that one, I do see the services on the phone, and can read and write to them. So yes, Bluetooth IS enabled and working (to some extent) on my machine.
I have the latest version of Win10, with an Intel AC8265 (with the latest Intel drivers). It supports Bluetooth 4.2.
So why can't I see the advertisement? There are no error messages at all, and the callbacks report that the advertisement has started as it should.
I'm also a bit confused by the relation between the BluetoothLEAdvertisementPublisher and the GattServiceProvider. Both do "publish", and the GattServiceProvider seems to be able to announce itself, but there is no way to add a CompanyId or ManufacturerData to it. Shouldn't both be used, and both be working?
To be more specific, it looks like the GattServiceProvider does the actual advertising, but only advertises the computer name, BT address and service GUIDs. No CompanyId or ManufacturerData at all. Googling around, I found some people claiming that a Company ID isn't required in an advertisement, and others saying it is (and has to be registered). If Microsoft is advertising without a Company ID, then I guess it's allowed.
And I can kind of understand it if I'm not allowed to advertise an Intel NUC as having a Company ID from a totally different company. But if that is the case, one would think that the BluetoothLEAdvertisementPublisher would return some error code when used on a Win10 machine?

Starting OpenStack instances programmatically

I am using OpenStack4J to interact with OpenStack. My goal at this point is simply to launch an instance. I can do this manually using my tenant: rosemend. And when I do this, I have a network called rosemond (Id: a9b097b3-af47-4222-b98e-f1b631f9ec45) that I select and make the instance part of.
However, using OpenStack4J, I am not able to make any progress. OpenStack4J requires a network port that I can't figure out how to set.
The call to set this network port would look like:
serverCreateBuilder.addNetworkPort("0a44eedc-8298-4544-87d7-094c7b34708e")
First I tried the Id of the rosemond network itself (a9b097b3-af47-4222-b98e-f1b631f9ec45). The error message in this case is:
Port id a9b097b3-af47-4222-b98e-f1b631f9ec45 could not be found.
Next, within OpenStack, when I click on the rosemond network, I see a list of 5 items called ports. I then tried each of them, each attempt resulting in the error message:
Port 0a44eedc-8298-4544-87d7-094c7b34708e is still in use.
And when I do not pass a network port at all, I get the error:
It is not allowed to create an interface on external network c6fb539b-2013-405c-903a-4700a00d954b
My question is: what value should I use here?
I would recommend going with JClouds instead. In my opinion it is easier to use and the documentation is better.
See my answer to Openstack cloud (identity service, nova service and swift service) vs Java application. There is some sample code on GitHub that you can check.
1) To create a VM with an existing port, the port ID is required.
2) The port you use to boot the VM instance must be in DOWN status (detached). If the port is attached to an instance (active), OpenStack will report a conflict. For OpenStack4J, it throws a ClientResponseException with the message: port xxx is still in use.
See https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail
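The question is about OpenStack4J, but the same idea can be illustrated with a hedged sketch in the Python openstacksdk (the cloud name, network name and flavor/image IDs are placeholders): pick or create a port on your network that is in DOWN status and boot the server against that port ID, not the network ID.

import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder entry in clouds.yaml

network = conn.network.find_network("rosemond")

# Find a detached (DOWN) port on the network, or create a fresh one.
port = next((p for p in conn.network.ports(network_id=network.id, status="DOWN")), None)
if port is None:
    port = conn.network.create_port(network_id=network.id)

server = conn.compute.create_server(
    name="demo-instance",
    flavor_id="FLAVOR_ID",          # placeholder
    image_id="IMAGE_ID",            # placeholder
    networks=[{"port": port.id}],   # pass the port ID, not the network ID
)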

Machine's uptime in OpenStack

I would like to know (and retrieve via REST API) the uptime of individual VMs running in OpenStack.
I was quite surprised that the OpenStack web UI has a column called "Uptime", but it actually shows the time since the VM was created. If I stop the VM, the UI shows Status=Shutoff, Power State=Shutdown, but the Uptime is still being incremented...
Is there a "real" uptime (I mean for a machine that is UP)?
Can I retrieve it somehow via the OpenStack's REST API?
I saw the comment at How can I get VM instance running time in openstack via python API? but the page with the extension mentioned there does not exist, and it looks to me like that extension will not be available in all OpenStack environments. I would like to have some standard way to retrieve the uptime.
Thanks.
(Version Havana)
I haven't seen any documentation saying this is the reason, but the nova-scheduler doesn't differentiate between a running and a powered-off instance, so your cloud can't be over-allocated or leave an instance in a position where it would be unable to be powered on. I would like to see a metric of actual system runtime as well, but at the moment the only way to gather that would be through Ceilometer or via Rackspace's StackTach.
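To see what the compute API itself gives you, here is a hedged sketch with a current Python openstacksdk (the cloud name and server name are placeholders): the server record only exposes status, power state and creation/launch timestamps, so any "uptime" computed from it is really time since creation/launch, which matches what the web UI shows.

import openstack

conn = openstack.connect(cloud="mycloud")      # placeholder cloud entry
server = conn.compute.find_server("my-vm")     # placeholder server name
server = conn.compute.get_server(server.id)

# These are the only time-related fields exposed for the instance itself;
# none of them stops counting when the VM is shut off.
print(server.status)        # e.g. ACTIVE or SHUTOFF
print(server.created_at)    # creation time
print(server.launched_at)   # OS-SRV-USG:launched_at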

Using snow (and snowfall) with AWS for parallel processing in R

In relation to my earlier similar SO question, I tried using snow/snowfall on AWS for parallel computing.
What I did was:
In the sfInit() function, I provided the public DNS to the socketHosts parameter like so:
sfInit(parallel=TRUE,socketHosts =list("ec2-00-00-00-000.compute-1.amazonaws.com"))
The error returned was Permission denied (publickey)
I then followed the instructions (I presume correctly!) on http://www.imbi.uni-freiburg.de/parallel/ in the 'Passwordless Secure Shell (SSH) login' section
I just cat'ed the contents of the .pem file that I created on AWS into ~/.ssh/authorized_keys on the AWS instance I want to connect to from my master AWS instance, and on the master AWS instance as well.
Is there anything I am missing?
I would be very grateful if users can share their experiences in the use of snow on AWS.
Thank you very much for your suggestions.
UPDATE:
I just wanted to update the solution I found to my specific problem:
I used StarCluster to set up my AWS cluster: StarCluster
Installed the snowfall package on all the nodes of the cluster
From the master node I issued the following commands:
hostslist <- list("ec2-xxx-xx-xxx-xxx.compute-1.amazonaws.com", "ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com")  # public DNS names of the worker nodes
sfInit(parallel = TRUE, cpus = 2, type = "SOCK", socketHosts = hostslist)  # start a socket cluster on those hosts
l <- sfLapply(1:2, function(x) system("ifconfig", intern = TRUE))  # run ifconfig on each worker and collect the output
lapply(l, function(x) x[2])  # the second line of each ifconfig output shows the worker's address
sfStop()  # shut the cluster down
The IP information confirmed that the AWS nodes were being utilized.
This doesn't look that bad, but the .pem file is the problem. It is sometimes not that simple, and many people have to fight with these issues. You can find a lot of tips in this post:
https://forums.aws.amazon.com/message.jspa?messageID=241341
Or check Google for other posts.
From my experience, most people have problems with these steps:
Can you log onto the machines via ssh? (ssh ec2-00-00-00-000.compute-1.amazonaws.com). Try to use the public DNS, not the public IP, to connect.
You should check your "Security Groups" in AWS to make sure port 22 is open for all machines (see the boto3 sketch after this list).
If you plan to start more than 10 worker machines, you should work on an MPI installation on your machines (much better performance!)
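For the security-group point, here is a hedged boto3 sketch that opens port 22; the region, group name and CIDR are placeholders, so adjust them to your own setup (and restrict the CIDR to your own machines if possible).

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Allow inbound SSH (TCP port 22) on the security group used by the cluster nodes.
ec2.authorize_security_group_ingress(
    GroupName="my-cluster-sg",   # placeholder security group name
    IpProtocol="tcp",
    FromPort=22,
    ToPort=22,
    CidrIp="0.0.0.0/0",
)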
Markus from cloudnumbers.com :-)
I believe @Anatoliy is correct: you're using an X.509 certificate. For the precise steps to take to add the SSH keys, look at the "Types of credentials" section of the EC2 Starters Guide.
To upload your own SSH keys, take a look at this page from Alestic.
It is a little confusing at first, but you'll want to keep clear which are your access keys, your certificates, and your key pairs, which may appear in text files with DSA or RSA.
