I'm looking to create a network monitor by reading the contents of the /proc/net directory. It's a specific requirement that I can't do any packet sniffing or anything like that; my only source is /proc/net. For example, I can get all the active TCP connection details from /proc/net/tcp, and so on.
The contents of these files keep changing, so I want to read them continuously, but only when their contents actually change; if there is no network activity the files won't change and I don't need to read them.
I looked into inotify, but it does not detect changes to files under /proc/net:
inotifywatch /proc/net/
I suspect continuous polling will be inefficient, so I'm looking for suggestions.
Thanks in advance.
Did you check the gio libraries? You can add a watch to an open file, and specify on which events you get a callback.
https://developer.gnome.org/glib/stable/glib-IO-Channels.html
Also, this might be of interest to you (it seems newer versions would have this patch already included):
https://gitorious.org/gnome-essentials/glib/commit/68f9255ec6434b25339cfd6055013e898730d0e7
https://mail.gnome.org/archives/commits-list/2011-September/msg13539.html
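For what it's worth, here is a minimal sketch of the g_io_add_watch() pattern that answer refers to, in C with glib (the callback name is just a placeholder). Note that poll() reports regular files, including /proc entries, as always readable, so without the patched behaviour linked above this effectively degenerates into polling:

    #include <glib.h>

    /* Called whenever the watched channel reports the requested condition. */
    static gboolean on_tcp_readable(GIOChannel *source, GIOCondition cond, gpointer data)
    {
        gchar *contents = NULL;
        gsize length = 0;

        /* /proc files must be re-read from the start each time. */
        if (g_file_get_contents("/proc/net/tcp", &contents, &length, NULL)) {
            g_print("read %" G_GSIZE_FORMAT " bytes from /proc/net/tcp\n", length);
            g_free(contents);
        }
        return TRUE; /* keep the watch installed */
    }

    int main(void)
    {
        GError *error = NULL;
        GIOChannel *channel = g_io_channel_new_file("/proc/net/tcp", "r", &error);
        if (channel == NULL) {
            g_printerr("open failed: %s\n", error->message);
            return 1;
        }

        /* Request a callback when the channel becomes readable. */
        g_io_add_watch(channel, G_IO_IN, on_tcp_readable, NULL);

        GMainLoop *loop = g_main_loop_new(NULL, FALSE);
        g_main_loop_run(loop);
        return 0;
    }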
I was wondering if this is possible at all. I am currently facing a situation where I have a legacy syslog-ng system with a number of sources integrated. We are working to slowly transition off of this particular instance, but to do so I need to make sure that the messages it emits retain their source host.
Unfortunately, when this legacy environment was built, the keep_hostname option was set to false, which meant that the engineers relied on source-log-specific HOST extraction.
As I am working to tee the data off to the new system, I need the data to retain its source hostname. Ideally one would just flip keep_hostname to yes, but there is too much risk right now as it could impact how data is being parsed throughout the system.
My ask is, while keep_hostname is globally disabled, is there a way that I can enable it within a filter or destination?
Doing this with a globally disabled keep-hostname() option is not really possible as the original host information is lost.
When keep-hostname(no) is used, a feature called "store-raw-message" can be enabled which stores the entire incoming message in the $RAWMSG macro. The raw message definitely contains the original hostname, but then it is up to you to extract the host from the whole message.
My ask is, while keep_hostname is globally disabled, is there a way that I can enable it within a filter or destination?
You can achieve something similar the other way around:
Setting keep-hostname() to yes keeps the original hostname intact, but you will have two different macros:
$HOST contains the original hostname
$HOST_FROM contains the hostname of the host the message was received from
A rewrite rule that overrides $HOST with $HOST_FROM can be added to the paths where you want to retain the old behavior, for example:
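A hedged sketch of what that could look like in syslog-ng configuration; the source, destination, and rewrite names below are made up for illustration:

    # Globally keep the hostname found in the message itself.
    options { keep-hostname(yes); };

    # Overwrite $HOST with $HOST_FROM to emulate the old keep-hostname(no)
    # behaviour on selected paths only.
    rewrite r_use_sender_host {
        set("${HOST_FROM}", value("HOST"));
    };

    # Legacy path: keep the old behaviour.
    log {
        source(s_legacy);
        rewrite(r_use_sender_host);
        destination(d_legacy);
    };

    # New path: the new system receives the original $HOST.
    log {
        source(s_legacy);
        destination(d_new_system);
    };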
I am writing a client and server as separate applications. There are some global strings that both should have access to in order to ensure proper communication between the two. What is the typical method to provide such strings to both applications?
I imagine one possible method would be to place the strings in a header file and distribute that file with both applications. Is there anything in Qt that I can use to obtain an OS-agnostic location to place this header file so both applications will know where to look for it?
I'm looking for a solution that benefits from existing Qt libraries, but any generic approach would work as well. I'm not even sure a "library" is necessary, but so far Qt has helped my applications be OS-agnostic and I don't want to break from that paradigm.
Update to add clarity: These global strings would be static and constant, meaning they wouldn't change during runtime, so shared memory isn't needed. I just don't want to have a header file in the client and a header file in the server and then always have to make sure that their contents are exactly the same.
EDIT: QSettings should do the trick, too, as @Amartel suggested.
It is persistent and easily accessed. Just make sure that your programs call readSettings and writeSettings when necessary if you want to update the settings on the fly. I like the INI format it supports.
http://qt-project.org/doc/qt-4.8/mainwindows-application.html
http://qt-project.org/doc/qt-4.8/qsettings.html#details
http://qt-project.org/doc/qt-4.8/tools-settingseditor.html
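As a minimal sketch of the QSettings/INI idea (the file path, group, and key names are placeholders), both the client and the server can read the same constant from a shared INI file as long as they agree on its location:

    #include <QSettings>
    #include <QString>

    // Read a shared constant from a common INI file; both applications
    // construct QSettings with the same (placeholder) path.
    QString protocolGreeting()
    {
        QSettings settings("/etc/myapp/shared.ini", QSettings::IniFormat);
        settings.beginGroup("protocol");
        // Fall back to a default if the key is missing.
        QString value = settings.value("greeting", "HELLO_V1").toString();
        settings.endGroup();
        return value;
    }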
QSharedMemory could also work.
http://qt-project.org/doc/qt-4.8/qsharedmemory.html
Example:
http://qt-project.org/doc/qt-4.8/ipc-sharedmemory.html
You can also achieve this with a QLocalServer.
http://qt-project.org/doc/qt-4.8/qlocalserver.html
http://qt-project.org/doc/qt-4.8/qlocalsocket.html
Example:
http://qt-project.org/doc/qt-4.8/ipc-localfortuneclient.html
http://qt-project.org/doc/qt-4.8/ipc-localfortuneserver.html
Hope that helps.
Qt gives you a lot of ways to share data between processes, but most of them require the processes to share a machine. Since you are implementing a client-server paradigm, I assume your applications interact with each other and could be located on different computers. In this case the most common way is to establish a TCP connection and transfer data through a TCP socket. Connection settings could be placed in a configuration file (for example QSettings with QSettings::IniFormat).
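A short sketch of that idea on the client side, assuming QTcpSocket with the host and port read from an INI file (the file name and keys are illustrative; the project needs QT += network):

    #include <QSettings>
    #include <QString>
    #include <QTcpSocket>

    // Connect to the server using host/port taken from a shared INI file.
    bool connectToServer(QTcpSocket &socket)
    {
        QSettings settings("connection.ini", QSettings::IniFormat);
        QString host = settings.value("server/host", "127.0.0.1").toString();
        quint16 port = static_cast<quint16>(settings.value("server/port", 12345).toUInt());

        socket.connectToHost(host, port);
        // Block for up to 3 seconds while the connection is established.
        return socket.waitForConnected(3000);
    }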
I'm running a script by pasting it in a console like this:
bin/client2 debug
... my script ...
The script normalizes the titles of the files. Since there are more than 20k files, it takes far too much time, so I need users to still be able to use the site, but in a read-only fashion.
But I assume that setting read-only true in zeo.conf would also prevent me from running my normalization script, wouldn't it?
How can I solve this?
Best regards,
Manuel.
There isn't a built-in way to do that, I'm afraid.
If your users alter the site when logged in, disable logging in for them until you are done.
Generally, for tasks like these, I run the changes in batches, to minimize conflicts and allow end-users to continue to use the site as normal. Break your work up in chunks, and commit after every n items processed.
You can add another ZEO client that is not read-only; it's not required that the ZEO server be read-only in order to have read-only clients.
So make all the clients that users actually hit read-only, then add an additional read-write client that isn't used by anyone but your script, and leave the ZEO server read-write.
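If it helps, a hedged sketch of the relevant zeoclient section of a client's zope.conf might look like this (the server address, storage name, and mount point are placeholders); the extra read-write client used by your script would simply omit the read-only line:

    <zodb_db main>
        mount-point /
        <zeoclient>
            server localhost:8100
            storage 1
            name zeostorage
            # Make this client read-only; the ZEO server itself stays read-write.
            read-only true
        </zeoclient>
    </zodb_db>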
I’m a complete newbie at BizTalk and I need to create a BizTalk 2006 application which broadcasts messages in a specific way. I’m not asking for a complete solution, but for advise and guidelines, which capabilities of BizTalk I should use.
There's a message source; for simplicity, say, a directory where the user adds files to publish them. There are several subscribers, each having a directory to receive published files. The number of subscribers can vary while the application is in production. There are also some rules which determine whether a particular subscriber needs to receive a particular file, based on the filename. For example, each subscriber has a pattern or mask which the names of the files it receives must match. Those rules (for example, the patterns) can change over time as well.
I don't know how to do this. Create a set of send ports at runtime, one for each destination? Is that possible? Use one port and change its binding? Would that work correctly with concurrent sends? Are there other ways?
EDIT
I realized my question may be too vague and general to prefer one answer over another to accept, so I just upvoted them.
You could look at using dynamic send ports to achieve this - if your subscribers are truly dynamic. This introduces a bit of complexity since you'll need to use an orchestration to configure the send port's properties based on your rules.
If you can, try to remove the complexity. If you know that you don't need to be truly dynamic when adding subscribers (i.e. a subscriber and its rules can be configured one time only) and you have a manageable number of subscribers, then I would suggest configuring each subscriber with its own send port and using a filter to create subscriptions based on message context properties. The beauty of this approach is that you don't need to create and deploy an orchestration, and it becomes a highly performant and scalable solution.
If the changes to the destinations are going to be frequent, you are right in seeking a more dynamic solution. One nice option is using dynamic send ports and the Business Rules Engine. You create a rule set for the messages you are receiving. This could be based on a destination property or a customer ID in the message. Using these facts, the rules engine can return a bunch of information such as the file mask, server name, IP address of the delivery server, etc. You can then use this information to configure the dynamic send port in the orchestration. The really nice thing here is that you can update the rule set in the rules engine without redeploying the whole solution. As a newbie, these are somewhat advanced concepts, but not as difficult as you may think.
For a simpler solution, you might want to look at setting the FILE send adapter's properties via its property schema (i.e. file name, directory, etc.). You could pull these values from a database with a helper class inside an expression shape. For each message going out, use the property schema to set where the message will be sent and what it will be named. This way, you just update the database as things change.
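As a rough, hedged illustration of both suggestions, the outbound file name and a dynamic send port's address can be assigned in orchestration expression/message-assignment shapes along these lines (the message, port, and path names are made up; treat it as a sketch rather than working code):

    // Message assignment shape: set the outbound file name for the FILE adapter.
    OutboundMsg(FILE.ReceivedFileName) = "normalized-name.xml";

    // Expression shape: point the dynamic send port at the subscriber's
    // drop folder, e.g. a value looked up from the rules engine or a database.
    DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "FILE://C:\\Subscribers\\SubA\\%SourceFileName%";
    DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";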
Good Luck!
I have a dynamically generated RSS feed that is about 150 MB in size (don't ask).
The problem is that it keeps crapping out sporadically and there is no way to monitor it without downloading the entire feed to get a 200 status. Pingdom times out on it and returns a 'down' error.
So my question is, how do I check that this thing is up and running?
What type of web server, and server side coding platform are you using (if any)? Is any of the content coming from a backend system/database to the web tier?
Are you sure the problem is not with the client code accessing the file? Most clients have timeouts and downloading large files over the internet can be a problem depending on how the server behaves. That is why file download utilities track progress and download in chunks.
It is also possible that other load on the web server, or the number of users, is impacting the server. If memory is limited, some servers may not be able to serve a file of that size to many users at once. You should review how the server is sending the file and make sure it is chunking it up.
I would recommend that you do a HEAD request to check that the URL is accessible and that the server is responding at minimum. The next step might be to setup your download test inside or very close to the data center hosting the file to monitor further. This may reduce cost and is going to reduce interference.
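For example, a HEAD request can be issued from the command line with curl to confirm the feed URL responds without pulling down the body (the URL below is a placeholder):

    curl -I --max-time 10 https://example.com/feed.xml

The -I flag asks for headers only, and --max-time caps how long the check is allowed to run.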
Found an online tool that does what I needed:
http://wasitup.com uses HEAD requests, so it doesn't time out waiting to download the whole 150 MB file.
Thanks for the help BrianLy!
Looks like Pingdom does not support HEAD requests. I've put in a feature request, but who knows.
I hacked this capability into mon for now (mon is a nice compromise between paying someone else to monitor and doing everything yourself). I have switched entirely to HTTPS, so I modified the https monitor to do it. I did it the dead-simple way: I copied the https.monitor file and called it https.head.monitor. In the new monitor file I changed the line that says (you might also want to update the function name and the place where it's called):
get_https to head_https
Now in mon.cf you can perform a HEAD request:
monitor https.head.monitor -u /path/to/file