Issues with getting Cisco ASA ingested into the CommonSecurityLog data table - syslog

Thank you for your time and help on this. I am having relentless issues with getting Cisco ASA logs ingested into the CommonSecurityLog data table. I think it stems from how I'm receiving the messages via syslog and my understanding of the architecture of the omsagent and how it differentiates between CEF and Syslog. Currently, we don't have anything writing to any syslog facilities. I am writing my Cisco ASA messages to a custom file that is generated every day. It is sending on TCP/1470 because Cisco ASA does not support TCP/514. The logs are flowing to the machine successfully, so I don't have a conf syntax issue. However, I can't seem to find anything helpful for getting this into Sentinel now that it is sitting on my syslog server, short of creating a custom log that won't have field mapping. Below is what my syslog-ng.conf looks like for the related source. I also ran the validate connectivity script within the data connector page to make sure everything was okay with the agent connecting to the workspace.
source s_cisco {
tcp(port(1470));
};
destination d_cisco_asa { file("/opt/syslog-ng/cisco_asa/$HOST/$YEAR-$MONTH-$DAY-$SOURCEIP-cisco_asa.log");};
filter f_cisco_asa {
host(x.x.x.x);
};
log { source(s_cisco); filter(f_cisco_asa); destination(d_cisco_asa); };

To collect logs with the Linux agent, we need to send them to the agent over port 25226:
#syslog config
destination security_oms { tcp("127.0.0.1" port(25226)); }; # tcp, to match the agent's protocol_type below
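The destination by itself does nothing until it is referenced in a log path; a minimal sketch reusing the source and filter names from the config above:
# route the filtered Cisco ASA traffic to the agent as well as to the file
log { source(s_cisco); filter(f_cisco_asa); destination(d_cisco_asa); destination(security_oms); };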
and then create the security events configuration file:
#oms config
#/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/
<source>
type syslog
port 25226
bind 127.0.0.1
protocol_type tcp
tag oms.security
format /(?<time>(?:\w+ +){2,3}(?:\d+:){2}\d+|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[\w\-\:\+]{3,12}):?\s*(?:(?<host>[^: ]+) ?:?)?\s*(?<ident>.*CEF.+?(?=0\|)|%ASA[0-9\-]{8,10})\s*:?(?<message>0\|.*|.*)/
<parse>
message_format auto
</parse>
</source>
<filter oms.security.**>
type filter_syslog_security
</filter>
You can find more about this on the OMS GitHub page under OMS security events configuration.
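After saving both files, restart syslog-ng and the OMS agent so the changes take effect (paths assume a default installation):
sudo systemctl restart syslog-ng
sudo /opt/microsoft/omsagent/bin/service_control restart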

Related

Is it possible to restrict ForceBindIP to only inbound/outbound traffic?

I'm using ForceBindIP to point an app at a specific network adapter, like this:
forcebindip -i 192.168.0.5 MyCSharpApp.exe
This works fine, and the app isn't aware of (and doesn't access) any of the other network adapters on the PC.
Is it possible to restrict ForceBindIP to outbound traffic only, leaving the app to receive data from any local network adapter? Or even to specify one network adapter for outbound traffic and another for inbound?
I can't find an extra startup parameter for ForceBindIP that does this.
I'd appreciate any help with this.
If I understand your problem correctly, you want to bind your application to listen for packets on all available interfaces but return packets through only one given interface. I also assume it's a server application and that you have neither the source code nor control over its behaviour.
Disclosure: I do not know how ForceBindIP works internally; I'm basing my understanding of it on this passage from the website:
it will then inject a DLL (BindIP.dll) which loads WS2_32.DLL into memory and intercepts the bind(), connect(), sendto(), WSAConnect() and WSASendTo() functions, redirecting them to code in the DLL which verifies which interface they will be bound to and if not the one specified, (re)binds the socket
Problems to overcome
I don't believe your desired configuration is possible with just one application-level DLL injector. I'll list a few issues that ForceBindIP would have to overcome to make it work:
To listen on a socket, an application has to bind() it to a unique protocol-address-port combination first. An application can bind itself to either a specific address or a wildcard (i.e. listen on all interfaces). Apparently, one can bind to the wildcard and a specific address simultaneously, as outlined in this SO question. These will, however, be two different sockets from the application's standpoint, so your application will have to know how to handle this sort of traffic (see the sketch after this list).
When accepting a client connection, accept() will create a new socket whose parameters are managed by Windows; I don't believe there's an API to intercept binding here - by this time the connection is considered established.
Now imagine we somehow got a magic socket. We can receive packets on one interface and send on another. The client (and all routing equipment on the way) would have to be aware that two packets originating from two different source IP addresses are actually part of the same connection, and be able to assemble the TCP session (or correctly merge UDP streams).
You can have multiple default gateways with different priorities and rules (which is a whole different topic to explore), but as far as I'm aware that's not going to solve your particular issue: the majority of routing protocols assume links are symmetric and expect packets to stay within the same interface. There are special cases like asymmetric routing and network interface teaming, but they have to be implemented at the per-interface level.
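To illustrate the first point, here is a minimal C# sketch that binds one listener to the wildcard address and another to a specific interface (port 5000 is hypothetical; the address is the one from the question). From the application's standpoint these are two separate sockets, each needing its own accept loop:
using System.Net;
using System.Net.Sockets;

class DualBindSketch
{
    static void Main()
    {
        // Socket 1: wildcard bind - accepts connections arriving on any interface.
        var wildcard = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        wildcard.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true); // may be needed for the two binds to coexist
        wildcard.Bind(new IPEndPoint(IPAddress.Any, 5000));
        wildcard.Listen(10);

        // Socket 2: specific bind - only connections arriving via this one interface.
        var specific = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        specific.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        specific.Bind(new IPEndPoint(IPAddress.Parse("192.168.0.5"), 5000));
        specific.Listen(10);

        // From here on the application must accept() and service each socket
        // separately - exactly the extra handling described above.
    }
}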
One potential solution
One way to achieve what you're after (I don't know enough about your environment to claim it will work) would be to create a virtual interface, set it into yet another IP network, bind your application to it, then use a firewall (to, say, allow multicast packets into the "virtual" network) and routing from that network to the required default gateway with the metric set to 1. I also suspect that not just any Windows edition will be that flexible, so you might need a Server edition.
I am sorry this didn't turn out to be a ready-to-fly solution; I am, however, hoping it gives you more context on the problem you are facing and points you toward other directions to explore.
You can use the Set-NetAdapterAdvancedProperty cmdlet in PowerShell to set the flow control of your specified adapter.
To get the names and properties of all network adapters:
Get-NetAdapterAdvancedProperty -Name "*"
Suppose you want the network adapter named "Ethernet 2" to only be used to receive data from the internet; then type:
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Flow Control" -DisplayValue "Rx Enabled"
You can find more at:
https://learn.microsoft.com/en-us/powershell/module/netadapter/set-netadapteradvancedproperty?view=win10-ps
The Microsoft Winsock complete-client example includes a usage that limits a socket to send-only or receive-only mode. It might help.
https://learn.microsoft.com/en-us/windows/win32/winsock/complete-client-code
Outbound and inbound limits are not imposed at bind time, but later, when the connection is established.
The line of code pertaining to this in the client code is toward the end:
// shutdown the connection since no more data will be sent
iResult = shutdown(ConnectSocket, SD_SEND);
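Since the app in the question is a C# executable, the .NET analogue is Socket.Shutdown; a one-line sketch, assuming a connected Socket named connectSocket:
// Close only the outbound half of the connection; the socket
// can still receive until the peer shuts down its own send side.
connectSocket.Shutdown(SocketShutdown.Send);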

DICOM fails to use C-MOVE: Move Request Failed: 0006:0317 Peer aborted Association (or never connected)

I am running the following command with the movescu tool from the OFFIS DICOM toolkit (DCMTK):
movescu -k 0010,0020="PAT004" ip_adress 104 -aec serverAET -aet myAET --study -ll debug -od data
And I keep getting the error from the title.
The association seems to have worked well, but the actual C-MOVE seems to fail at the moment of the transfer:
[screenshot of the error message]
The screenshot shows that you can successfully establish the connection, but the server aborts after receiving the request.
You missed specifying the mandatory key QueryRetrieveLevel (0008,0052). Add
-k 0008,0052="PATIENT"
to your command, and it should work.
However, moving means that the server (the MOVE-SCP) is prompted to transfer the images matched by the request to a destination application entity. This must be specified by providing the AET of that system:
-aem <AET of the destination>
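Putting both fixes together, the complete command might look like this (MOVE_DEST is a placeholder for the destination AET):
movescu -k 0008,0052="PATIENT" -k 0010,0020="PAT004" ip_adress 104 -aec serverAET -aet myAET -aem MOVE_DEST --study -ll debug -od data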
This frequently fails for one of these reasons:
The move destination AE title must be resolved to an IP address and port. This is achieved through the C-MOVE-SCP's configuration.
A Storage SCP has to listen for the images transferred in the scope of the C-MOVE; its IP, AET and port have to match the MOVE-SCP's configuration for the move destination AE title.

TCP > COM1 for receiving messages and displaying on POS display pole

I currently have a Java applet running on my web page that communicates with a display pole via COM1. However, since the Java update I can no longer run self-signed Java applets, and I figure it would just be easier to send an AJAX request back to the server and have the server send a response to a TCP port on the computer...the computer would need a TCP > COM virtual adapter. How do I install a virtual adapter to go from a TCP port to COM1?
I've looked into com0com and that is just confusing as hell to me, and I don't see how to connect any ports to COM1. I've tried tcp2com but it doesn't seem to install the service in Windows 7 x64. I've tried com2tcp and the interface seems like it WOULD work (I haven't tested), but I don't want an app running on the desktop...it needs to be a service that runs in the background.
So to summarize how it would work:
Web page on comp1 sends AJAX request to server
Server sends text response to comp1 on port 999
comp1 has virtual COM port listening on port 999, sends data to COM1
pole displays data
EDIT: I'm using Win 7 x64 and tcp2com doesn't work as a service. I tried using srvany but I get an error stating that the application started then stopped. If I use powershell and pass the tcp2com as an argument, it doesn't quit but it also doesn't run. So I nixed the whole 'service' deal and put the command: powershell -windowstyle hidden "tcp2com --test tcp/999 com1" and it works...sort of. The characters that get sent are all effed. I can write "echo WTF > COM1" on another computer which has COM2TCP (different vendor) and it'll come up as a single block on the POS display pole. However if I use COM2TCP on both the server and client machines, everything works fine...but that's only a trial version and it costs several hundred dollars! On another note, is there a way to send the raw text over IP without having to use another Virtual COM > IP adapter on another computer? Sort of like how curl works but different...?
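For reference, the core of such a TCP > COM bridge is small. A minimal C# sketch (assuming COM1, 9600 baud, and port 999 from the summary above; it would still need to be wrapped as a Windows service) might look like:
using System;
using System.IO.Ports;
using System.Net;
using System.Net.Sockets;

class TcpToComBridge
{
    static void Main()
    {
        // Serial settings are assumptions; match them to the display pole.
        using (var com = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
        {
            com.Open();
            var listener = new TcpListener(IPAddress.Any, 999); // port 999 from the question
            listener.Start();
            var buffer = new byte[1024];
            while (true)
            {
                using (var client = listener.AcceptTcpClient())
                using (var stream = client.GetStream())
                {
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        com.Write(buffer, 0, read); // forward raw bytes to COM1
                    }
                }
            }
        }
    }
}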
After a somewhat exhaustive search, I came across a program called 'piracom'. It's a very simple app that lets you specify port settings for the express purpose of connecting a serial port to a listening port over the network. So this is IP > Serial. For Serial > IP I used HW-VSP3-Single, as even the piracom website says it's compatible! I've tested it and it works!
I just put a shortcut to piracom in the startup folder of my user account; the app runs off a .ini file that it updates every time you make a change...so if you run the server and hide it, on the next reboot of the PC it'll start up running and hidden with all prior settings. Easy.
Now it's a matter of installing HW-VSP3 on the server and adding a method to the Rails app which will write to the virtual COM port. The only issue I can see right now is that writing echo \14Test This! > COM3 actually prints the \14...if I do that in my Java applet, it sends the "go to beginning" signal.
Addendum 1: The \14 problem was fixed by using the serialport gem for RoR. I created a method in a controller that returned head :no_content and then sent the data to the COM port. Calls to this method were made via jQuery's $.ajax, using the "HEAD" HTTP method. Apparently, though, I had to add the GET verb in the Rails routes because the HEAD option isn't supported for some gimpy reason.
Addendum 2: Some garbage data was being sent to the display pole at the end of the string...turns out I needed to turn off the "NVT" option in HW-VSP3. Also keep in mind that firewalls need to be modified to allow communication.

HTTP Error: 400 when sending msmq message over http

I am developing a solution which will utilize MSMQ to transmit data between two machines. Due to the separation of said machines, we need to use HTTP transport for the messages.
In my test environment I am using a Windows 7 x64 development machine, which is attempting to send messages using a homebrew app to any of several test machines I have control over.
All machines are either windows server 2003 or server 2008 with msmq and msmq http support installed.
For any test destination, I can use the following queue path name with success:
FORMATNAME:DIRECT=TCP:[machine_name_or_ip]\private$\test_queue
But for any test destination, the following always fails:
FORMATNAME:DIRECT=HTTP://[machine_name_or_ip]/msmq/private$/test_queue
I have used all permutations of machine names/IPs available. I have created mappings using the method described in this blog post. All result in the same HTTP Error: 400.
The following is the code used to send messages:
// requires a reference to the System.Messaging assembly
using System.Messaging;

MessageQueue mq = new MessageQueue(queuepath);
System.Messaging.Message msg = new System.Messaging.Message
{
Priority = MessagePriority.Normal,
Formatter = new XmlMessageFormatter(),
Label = "test"
};
msg.Body = txtMessageBody.Text;
msg.UseDeadLetterQueue = true;
msg.UseJournalQueue = true;
msg.AcknowledgeType = AcknowledgeTypes.FullReachQueue | AcknowledgeTypes.FullReceive;
msg.AdministrationQueue = new MessageQueue(@".\private$\Ack");
if (SendTransactional)
mq.Send(msg, MessageQueueTransactionType.Single);
else
mq.Send(msg);
Additional Information: in the IIS logs on the destination machines I can see each message I send being recorded as a POST with a status code of 200.
I am open to any suggestions.
The problem can be caused by the IP address of the destination server having been NAT'ed through a Firewall.
In this case the IIS server receives the message okay and passes it on to MSMQ. MSMQ then reads the message and sees the destination of the message as something different from the known IP addresses of the server. At this point MSMQ rejects the message and IIS returns HTTP status 400.
Fortunately the solution is fairly straightforward. Look in %windir%\System32\msmq\mapping. This folder can contain a number of XML files (often sample files are provided), each containing mappings between one address and another. The name of the file can be anything you like; here is an example of the XML-formatted contents:
<redirections xmlns="msmq-queue-redirections.xml">
  <redirection>
    <from>http://external_host/msmq/external_queue</from>
    <to>http://internal_host/msmq/internal_queue</to>
  </redirection>
</redirections>
The MSMQ service then needs restarting to pick up the new configuration, for instance from the command line:
net stop msmq
net start msmq
References:
http://blogs.msdn.com/b/johnbreakwell/archive/2008/01/29/unable-to-send-msmq-3-0-http-messages.aspx
http://msdn.microsoft.com/en-us/library/ms701477(v=vs.85).aspx
Maybe you have to encode the $ as %24.
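That is, the HTTP queue path would look something like:
FORMATNAME:DIRECT=HTTP://[machine_name_or_ip]/msmq/private%24/test_queue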

Sandbox violation on second socket send

I have a Flex client using a Flash binary (TCP) socket for communication with a Java server. I have a localhost (Apache) server providing a crossdomain.xml file which is wide open just while I am testing.
My code successfully loads the policy file on startup.
I then connect the socket to the server without any difficulty and send a message and get a response. All good so far.
However, when I send a second message through the same socket, I get a pause of about 12 seconds and then a sandbox violation error:
Security Error: Error #2048: Security sandbox violation: file:///C:/apache_root/ttt1/ttt1.swf cannot load data from localhost:45455.
This is the same port and socket through which the first message succeeded.
I tried re-loading the policy file before every send, but I get the same result.
Any idea why this might be happening? I clearly have an open socket at one point. I am flushing the socket after each send and I tried doing that after each read as well, but the same result.
Thanks in advance
EDIT:
If I recreate the socket prior to every call, my code works. I am struggling to believe that this is correct, but maybe there is a Socket setting I am missing.
As far as I know, if you're using binary sockets the crossdomain.xml is not loaded via HTTP.
Have you checked your Apache access logs to see whether the crossdomain file is even queried?
You might get a connection from Flash via TCP asking for the policy file on your Java server (not using HTTP; it just sends the string "<policy-file-request/>" or similar). Look out for these requests. If you don't answer them within 3 seconds (or so), Flash throws a sandbox violation.
The first thing you have to do when you want to make a socket connection is to load the policy file. This only has to be done once per load of the SWF.
Security.allowDomain(host);
Security.loadPolicyFile("xmlsocket://"+host+":"+port);
The request will be made on the assigned port (45455 in your case); your server will have to listen on that port for the request "<policy-file-request/>" (without the quotes).
When that request is received, you need to return the crossdomain.xml to the client,
with the node <allow-access-from domain="*" to-ports="*" />.
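A minimal policy file returned to the client might look like this:
<?xml version="1.0"?>
<cross-domain-policy>
  <allow-access-from domain="*" to-ports="*" />
</cross-domain-policy>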
After the cross-domain file is sent, you need to close the socket on the server side.
On the client side, you need to ignore the policy response, as Flex will handle that; at that point you can reconnect to the socket server.
Then you can do your data send/receive.
I have a feeling the reason it actually worked for you is that you were using the connection for the policy file to transmit your data before it timed out.
I would suggest reading up on the new style of cross-domain policies, and also on the protocol you are using for your socket server.
I think it depends on the sandbox policy you used in the compilation process of your SWF, not on your crossdomain.xml... maybe this documentation helps you: Security sandboxes.
But I'm not 100% sure.
This sort of sounds like a cache problem. Perhaps you're pulling the first socket connection out of cache and the second one gets rejected because it's getting a 200 from the server.
You might want to add localhost to your Flash security exceptions list for debugging. That will quiet the sandbox errors until you get your piece to its production environment.
