SIEM CEF format syntax - syslog

I am new to SIEM systems and currently stuck on a small issue that I could not find an answer for online, so please help out.
I am trying to create a CEF-format entry. Is the following extension acceptable per the standard?
I have this:
cs3Label=infoMap
cs3=[{key1,key2},{key3,key4}]
My concern is whether
[{...,...},{...,...}]
is allowed as the value of the provided String extensions.

I use this for the CEF format:
CEF:0|MyCompany|MyProduct|MyVersion|FileName %
dname=% dst=% dpt=%
prot=% src=% spt=% suser=%<=userName>
xAuthenticatedUser=% requestMethod=%
The %< > placeholders are replaced with actual data. After the CEF header, each key/value pair is joined with = and separated from the next pair by a space.
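As far as the CEF specification goes, extension values are opaque strings in which only the backslash and the equals sign need escaping (pipes only matter in the header), so a bracketed list like the one above should be syntactically acceptable. A minimal sketch of building such a record; the class, method names, and the concrete field values are illustrative, not part of any real product:

```java
public class CefExample {
    // In the CEF header, backslash and pipe must be escaped.
    static String escapeHeader(String s) {
        return s.replace("\\", "\\\\").replace("|", "\\|");
    }

    // In the extension, backslash and equals sign must be escaped.
    static String escapeExtension(String s) {
        return s.replace("\\", "\\\\").replace("=", "\\=");
    }

    public static String buildCef() {
        // Header: CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity
        String header = String.join("|",
                "CEF:0",
                escapeHeader("MyCompany"),
                escapeHeader("MyProduct"),
                escapeHeader("1.0"),
                escapeHeader("100"),
                escapeHeader("FileName"),
                "5");
        // Extension: space-separated key=value pairs.
        String extension = String.join(" ",
                "dst=10.0.0.1",
                "dpt=443",
                "suser=" + escapeExtension("alice"),
                "cs3Label=infoMap",
                "cs3=" + escapeExtension("[{key1,key2},{key3,key4}]"));
        return header + "|" + extension;
    }

    public static void main(String[] args) {
        System.out.println(buildCef());
    }
}
```

Note that the brackets and braces in the cs3 value pass through untouched: they have no special meaning in an extension value, so the receiving SIEM simply stores them as part of the string.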

May need either decryption or some type of Lua conversion?

I don't have VB or anything installed, as I have absolutely no clue how to properly code (I can read and understand very basic code), but I have no idea about functions/methods etc.
I've got a Lua file that I want to decode so I can actually read it. From what I've read here on Stack Overflow, I gather it may not be encrypted, but rather in some Lua crypt format or similar? Any input would be much appreciated! Thanks!
Here's the code:
FXAP …(the rest of the file is unprintable binary data, omitted here)
I tried basic research and online encryption-detection websites (none worked).

DateTimeParseException while trying to perform ZonedDateTime.parse

Using Java 8u222, I've been trying a simple operation that raises an error I'm not able to fully understand. The line of code:
ZonedDateTime.parse("2011-07-03T02:20:46+06:00[Asia/Qostanay]");
The error:
java.time.format.DateTimeParseException: Text '2011-07-03T02:20:46+06:00[Asia/Qostanay]' could not be parsed, unparsed text found at index 25
at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1952)
at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1851)
at java.time.ZonedDateTime.parse(ZonedDateTime.java:597)
at java.time.ZonedDateTime.parse(ZonedDateTime.java:582)
Using the same date (the timezone may be incorrect; the intention here is just to test), I changed the value in the square brackets and it works. That is:
ZonedDateTime.parse("2011-07-03T02:20:46+06:00[Europe/Busingen]");
It works as expected, as do other values such as:
ZonedDateTime.parse("2011-07-03T02:20:46+06:00[Asia/Ulan_Bator]")
ZonedDateTime.parse("2011-07-03T02:20:46+06:00[SystemV/CST6CDT]")
I found some similar questions, such as the one below, but not exactly the same usage that I'm trying / facing.
Error java.time.format.DateTimeParseException: could not be parsed, unparsed text found at index 10
Can someone with an understanding of the Java date/time API help me grasp what I'm doing wrong here?
Thanks.
Asia/Qostanay is a zone that doesn't exist in JDK 8's list of timezones; it was added in a later tzdata release.
If you don't care about the location part of the timezone, just strip the [...] suffix off the end of the string before parsing. Knowing that the offset is +06:00 is sufficient for almost all purposes.
Alternatively, upgrade to a more recent version of Java.
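A minimal sketch of that stripping approach (the method name parseLenient is my own): cutting the string at the first [ leaves a plain ISO-8601 offset timestamp, which OffsetDateTime can parse on any JDK 8 build regardless of its tzdata version.

```java
import java.time.OffsetDateTime;

public class ZoneStrip {
    // Drop the trailing "[Region/City]" suffix, if present, so the
    // timestamp parses even on JDKs whose tzdata predates the zone.
    public static OffsetDateTime parseLenient(String text) {
        int bracket = text.indexOf('[');
        String trimmed = bracket >= 0 ? text.substring(0, bracket) : text;
        // The offset (+06:00) is still in the string, so the instant
        // itself is preserved; only the region name is discarded.
        return OffsetDateTime.parse(trimmed);
    }

    public static void main(String[] args) {
        // Parses fine even though JDK 8 does not know Asia/Qostanay.
        System.out.println(parseLenient("2011-07-03T02:20:46+06:00[Asia/Qostanay]"));
    }
}
```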

First token could not be read or is not the keyword 'FoamFile' in OpenFOAM

I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM: it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the required files.
In one of my first tries I made a very simple model, but since I wanted to check it thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took about a day (which is why I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam using 4 processors in parallel with
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of error.
The problem is that reconstructPar works perfectly, but when I then try to view the results with paraFoam this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that maybe some files are empty when they are not supposed to be, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always starts with a header dictionary named FoamFile. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
While constructing the dictionary, if this header is absent, OpenFOAM ceases operation and raises the error message you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably to support OpenFOAM's abstraction mechanisms, which would be difficult to implement otherwise.
As mentioned in the comments, adding the header entity almost always solves this problem.

Tesseract use of number-dawg

I need to specify a numeric pattern. I have already done the training normally.
I created a config file that has the line
user_patterns_suffix user-patterns
and the file user-patterns contains my patterns, for example:
:\d\d\d\d\d\d\d.
:\d\d\d\d\d\d\d\d\d;
!\d\d\d\d\d\d\d\d}
Then I launch tesseract with the config file over a TIF, and it prints an "Error: failed to insert pattern" message for the first two patterns. Ultimately it acts as if no pattern had been supplied.
I need to recognize only and always those patterns, so I also tried to train a language with a number-dawg file, but then, when running the tesseract command, I got a segmentation fault.
In the number-dawg file I used this conversion of the above patterns:
: .
: ;
! }
My questions, as the Google documentation is not clear and English is not my first language:
Where should the patterns file be used? I suppose number-dawg has to be used during training (but I got a segfault, so I couldn't try it), and user-patterns during the recognition phase, when launching Tesseract (but that didn't work). Where am I going wrong?
Do I also need a dictionary when training with number-dawg? My character set is digits and punctuation only, and since every possible number can occur, a dictionary is not feasible. If I do need dictionaries, how should I build them?
Thanks in advance for help, any hint would be very appreciated

How to import Geonames into SQLite?

I need to import the Geonames database (http://download.geonames.org/export/dump/) into SQLite (the file is about a gigabyte in size, ±8,000,000 records, tab-delimited).
I'm using the built-in SQLite tools of Mac OS X, accessed through the terminal. All goes well until record 381174 (tested with an older file; the exact number varies slightly depending on the version of the Geonames database, as it is updated every few days), where the error "expected 19 columns of data but found 18" is displayed.
The exact line causing the problem is:
126704 Gora Kyumyurkey Gora Kyumyurkey Gora Kemyurkey,Gora
Kyamyar-Kup,Gora Kyumyurkey,Gora Këmyurkëy,Komur Qu",Komur
Qu',Komurkoy Dagi,Komūr Qū’,Komūr Qū”,Kummer Kid,Kömürköy Dağı,kumwr
qwʾ,كُمور
قوء 38.73335 48.24133 T MT AZ AZ 00 0 2471 Asia/Baku 2014-03-05
I've tested various countries separately, and the Western countries all imported completely without a problem, leading me to believe the problem lies somewhere in the exotic characters used in some entries. (I've put this line into a separate file and tested with several other database programs; some gave an error, some imported without a problem.)
How do I solve this error, or are there other ways to import the file?
Thanks for your help and let me know if you need more information.
Regarding the question title, a preliminary search resulted in
the GeoNames format description ("tab-delimited text in utf8 encoding")
https://download.geonames.org/export/dump/readme.txt
some libraries (untested):
Perl: https://github.com/mjradwin/geonames-sqlite (+ autocomplete demo JavaScript/PHP)
PHP: https://github.com/robotamer/geonames-to-sqlite
Python: https://github.com/commodo/geonames-dump-to-sqlite
GUI (mentioned by #charlest):
https://github.com/sqlitebrowser/sqlitebrowser/
The SQLite tools have import capability as well:
https://sqlite.org/cli.html#csv_import
It looks like a bidirectional-text issue. "كُمور قوء" is expected to be at the end of the comma-separated alternate-name list; however, because it is right-to-left (RTL) text, it displays on the wrong side of the latitude and longitude values.
I don't have visibility of your import method, but it seems likely to me that this is why it thinks a column is missing.
I found the same problem using the script from the geonames forum here: http://forum.geonames.org/gforum/posts/list/32139.page
Despite adjusting the script to run on Mac OS X (Sierra 10.12.6), I was getting the same errors. But thanks to the script author, since it helped me get the SQLite database file created.
After a little while I decided to use DB Browser for SQLite (version 3.11.2) rather than continue with the script.
I had errors with this method as well, and found that I had to set the "Quote character" setting in the import dialog to the blank state. Once that was done, the import of the full allCountries.txt file ran to completion, taking just under an hour on my MacBook Pro (an old one, but with SSD).
Although I have not dug in deeper, I am assuming that the Geonames text files must not be quote-parsed in any way. Each line simply needs to be handled as tab-delimited UTF-8 strings.
At the time of writing allCountries.txt is 1.5GB with 11,930,517 records. SQLite database file is just short of 3GB.
Hope that helps.
UPDATE 1:
Further investigation has revealed that it is indeed due to the embedded quotes in the Geonames files; see https://sqlite.org/quirks.html#dblquote, which shows that SQLite has quirks around double quotes. Hence you need to be able to switch off quote parsing in SQLite.
Although the 3.11.2 version of DB Browser is based on SQLite 3.27.2, which does not have the required changes to ignore the quotes, I can only assume it escapes the quotes when you set the "Quote character" to blank.
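The "no quote parsing" point can be sketched as follows: since Geonames rows are plain tab-separated UTF-8 with no quoting convention, a plain split on tab recovers all 19 columns even when a name contains a double quote, whereas a CSV-style parser would treat that quote as a field delimiter and merge columns. The abbreviated sample row below is modeled loosely on the problem line from the question:

```java
public class TabSplit {
    // Geonames rows are plain tab-separated values: split on '\t' only.
    // A CSV-style quote parser would swallow the '"' inside names like
    // Komur Qu" and merge adjacent columns, causing the column-count error.
    public static String[] splitRow(String line) {
        // limit -1 keeps trailing empty fields, which Geonames rows contain
        return line.split("\t", -1);
    }

    public static void main(String[] args) {
        String row = "126704\tGora Kyumyurkey\tGora Kyumyurkey\t"
                + "Komur Qu\",Komur Qu'\t38.73335\t48.24133\tT\tMT\tAZ\t"
                + "AZ\t00\t\t\t\t0\t\t2471\tAsia/Baku\t2014-03-05";
        String[] cols = splitRow(row);
        // 19 columns despite the embedded double quote
        System.out.println(cols.length);
    }
}
```

The same idea is what setting the "Quote character" to blank achieves in the DB Browser import dialog: the quote character stops being special, and only tabs delimit fields.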
