Does anybody know where to find sample SolarWinds SWQL queries to get health data for routers and switches? If anybody can post some samples, it would be a great help.
The easiest way to retrieve router/switch health data is probably to configure OID/MIB imports using the Universal Device Poller (UnDP) tool available on your primary poller.
Simple Google searches will get you to recent MIB listings, which you can then poll from the target devices and display on node-related pages of your own choosing and design.
It's admittedly tedious, but once they're in, they're good until you replace the hardware, as long as you're willing to commit to SNMP polling of the device. The tool also lets you do trial-and-error testing before committing the MIB import you're working on. For reference, I'm referring to NPM 11.5/12; earlier versions should have this tool to some extent, but no promises.
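If you'd rather pull the data with SWQL directly, as the question asks, here is a minimal sketch using the SolarWinds Python orionsdk client. The server name and credentials are placeholders, and the fields and filter are assumptions based on the standard Orion.Nodes schema; verify them against your own installation (SWQL Studio is handy for that).

```python
# Minimal sketch: query basic node health via SWQL with the orionsdk client.
# Assumes: pip install orionsdk, and that your Orion.Nodes schema exposes
# CPULoad, PercentMemoryUsed and Status (standard fields, but verify locally).
from orionsdk import SwisClient

swis = SwisClient("your-orion-server", "admin", "password")  # hypothetical credentials

results = swis.query(
    "SELECT NodeID, Caption, Vendor, CPULoad, PercentMemoryUsed, Status "
    "FROM Orion.Nodes "
    "WHERE Vendor LIKE '%Cisco%'"  # example filter; change to match your routers/switches
)

for node in results["results"]:
    print(node["Caption"], node["CPULoad"], node["PercentMemoryUsed"], node["Status"])
```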
I want to fetch data for a stock. Since the data changes very fast, is there any way to pull it, say 50-100 times a second, from trading websites?
And can we implement that using a Raspberry Pi 4 8 GB model?
A RasPi 4 should be more than adequate for this task; both the Ethernet and Wi-Fi hardware are capable of connections at these speeds (unless you're running a bunch of other stuff on it). Consider where your bottlenecks may be, most likely your ISP or other network traffic. Consider avoiding Wi-Fi in favor of Cat5e or Cat6, hanging this device off your router (at the edge) to keep LAN traffic lower, and applying QoS settings if you think this traffic may compete with other LAN traffic.
This appears to be a general question with no specific platform in mind. For stocks, there are lots of platforms to choose from.
APIs for trading platforms often include a method to open a stream. Instead of a full TCP conversation for each price check, a stream tells the server to just keep on sending data. There are timeout mechanisms of course, but it is good to close the stream gracefully: it's polite, since you're consuming server resources at a different scale, and I've seen some financial APIs monitor and throttle stream subscribers who leave sessions open.
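As a rough illustration, here is a minimal sketch of subscribing to a price stream over a WebSocket, assuming Python with the websockets package; the endpoint URL, subscribe message, and JSON shape are hypothetical, so substitute whatever your platform's API docs specify.

```python
# Minimal sketch: open a price stream, read a few updates, then close gracefully.
# The endpoint, subscribe message and message format below are placeholders, not a real API.
import asyncio
import json
import websockets

async def watch_prices():
    uri = "wss://example-broker.com/stream"  # hypothetical endpoint
    async with websockets.connect(uri) as ws:
        # Tell the server which symbol we want; real APIs define their own message format.
        await ws.send(json.dumps({"action": "subscribe", "symbol": "AAPL"}))
        for _ in range(100):
            update = json.loads(await ws.recv())
            print(update)
    # Leaving the "async with" block closes the connection cleanly.

asyncio.run(watch_prices())
```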
For some APIs/languages you can find solid client classes already built on GitHub, although if you're simply pulling and reading a stream, the code snippets in the platform's API docs should be enough to get you going.
Be sure to find out what other overhead may be involved. For example, if an account or API key is needed to open a stream, then either a session must be opened first or the credentials must be passed with the request that opens the stream. The API docs will say which. If you're new to this sort of thing, just be a detective and try to infer what is needed. API docs usually try to be precise and technically correct with the absolute minimum word count.
Simply checking the stream should be easy. Depending on how that stream can be handled by your code/script, it may be harder to perform logic on the stream while it is being updated. That's usually a threading issue or a variable-scope issue, depending on the script/code. For what you're doing I would consider Python or PowerShell, depending on your skill set and other design parameters.
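To illustrate the threading point, here is a minimal sketch in plain-stdlib Python, where fetch_quote is a made-up stand-in for whatever call or stream read your platform actually provides: a reader thread keeps pulling updates while the main thread applies logic to them through a queue, so there is no shared-variable scope problem.

```python
# Minimal sketch: one thread keeps pulling quotes, the main thread processes them.
# fetch_quote() is a hypothetical stand-in for your platform's API call or stream read.
import queue
import threading
import time

def fetch_quote(symbol):
    # Placeholder: in reality this would read the next message from your stream/API.
    return {"symbol": symbol, "price": 123.45, "ts": time.time()}

updates = queue.Queue()

def reader(symbol, stop):
    while not stop.is_set():
        updates.put(fetch_quote(symbol))
        time.sleep(0.01)  # roughly 100 polls per second, as in the question

stop = threading.Event()
threading.Thread(target=reader, args=("AAPL", stop), daemon=True).start()

try:
    while True:
        quote = updates.get(timeout=1)  # main thread does the logic, no shared variables
        if quote["price"] > 125:
            print("alert:", quote)
except KeyboardInterrupt:
    stop.set()
```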
I want to build a decentralized, reddit-like system using P2P. Basically, I want to retain the basic capabilities of reddit, but make it decentralized, to make it more robust and immune to censorship. This will also allow people to develop different clients to match the way they want to browse it.
Could you recommend good p2p libraries to base my work on? They should be open-source, cross-platform, robust and easy to use. I don't care much about the language, I can adapt.
Disclaimer: warning, self-promotion here !!!
Have you considered JXTA's latest release? It is probably sufficient for what you want to do. Otherwise, we are working on a new P2P framework called Chaupal, but it is not operational yet.
EDIT
There is also what I call the quick-and-dirty UDP solution (which is not so dirty after all, I should call it minimal).
Just implement one server with a public address and start listening for UDP.
Peers located behind NATs contact the server, which can read from the received datagrams how their private IP address and port have been translated into a public address and port.
You send that information back to the peer, who can forward it to other peers. The server can also help exchange this information between peers.
Then peers can communicate directly (one-to-one) by sending datagrams to these translated addresses.
Simple, easy to implement, but it does not cover lost datagrams, replays, out-of-order delivery, etc. (i.e., the typical stuff that TCP solves for you in the network stack).
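A minimal sketch of the rendezvous part, assuming plain Python sockets: the server simply records each peer's translated address as seen in the incoming datagram and echoes it back. Exchanging addresses between peers and the hole-punching itself are left out, and the port number is arbitrary.

```python
# Minimal sketch of the rendezvous server: for each datagram received,
# report back the public (translated) address:port the peer appears to come from.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("0.0.0.0", 9999))  # assumes this host has a public address; port is arbitrary
peers = {}                      # peer name -> translated (ip, port)

while True:
    data, addr = server.recvfrom(1024)  # addr is the address as translated by the NAT
    name = data.decode(errors="replace").strip()
    peers[name] = addr
    # Echo the translated address back so the peer can share it with other peers.
    server.sendto(f"{addr[0]}:{addr[1]}".encode(), addr)
```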
I haven't had a chance to use it, but Telehash seems to have been made for this kind of application. Peer2Peer apps have a particular challenge dealing with the restrictions of firewalls... since Telehash is based on UDP, it's well suited for hole-punching through firewalls.
EDIT for static_rtti's comment:
If code velocity is a requirement, libjingle has a lot of effort going into it, but it is primarily geared towards XMPP. You can port over parts of the ICE code and at least get hole-punching. See the libjingle architecture overview for details about their implementation.
Check out CouchDB. It's a decentralized web app platform that uses an HTTP API. People have used it to create "CouchApps", which are decentralized CouchDB-based applications that can spread virally to other CouchDB servers. All you need to write CouchApps is JavaScript and some familiarity with the CouchDB API. You can read this free online book to learn more: http://guide.couchdb.org
The secret sauce of CouchDB is a master-to-master replication protocol that lets information spread like a virus. When I attended the first CouchConf, they demonstrated how efficient this is by throwing a "Couch Party" (which is where you have a room full of people replicating to the person next to them, simulating an ad hoc network).
Also, all the code that makes a CouchApp work is public by default in special entities known as Design Documents.
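As a small illustration of the replication piece, this is roughly what kicking off master-to-master sync between two nodes looks like through CouchDB's standard _replicate endpoint. It's a sketch assuming Python with the requests package; the hostnames, database name, and credentials are made up.

```python
# Minimal sketch: ask one CouchDB node to continuously replicate a database
# to another node, and vice versa, which is the "spreads like a virus" part.
import requests

def replicate(source, target):
    # POST /_replicate is the standard CouchDB replication trigger.
    resp = requests.post(
        "http://localhost:5984/_replicate",  # hypothetical local node
        json={"source": source, "target": target, "continuous": True},
        auth=("admin", "password"),          # made-up credentials
    )
    resp.raise_for_status()
    return resp.json()

# Two-way (master-to-master) sync between two hypothetical nodes.
replicate("http://node-a:5984/reddit_posts", "http://node-b:5984/reddit_posts")
replicate("http://node-b:5984/reddit_posts", "http://node-a:5984/reddit_posts")
```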
P.S. I've been thinking of doing a similar project, but I don't have a lot of time to devote to it at the moment. GOD SPEED MY BOY!
My C++ turn-based game server (which uses a database) cannot keep up with the current average number of clients (players), so I want to expand it across multiple computers and databases while all clients remain within a single game world (the servers must communicate with each other and use multiple databases).
Are there any tutorials/books/common standards that explain the best way to do this?
The way you put the database into the picture might be misleading: clustering solutions exist for all of the most widely used RDBMSes, so if you need to support your DB activities with more than one DB node, you will just have to check the documentation from your DB vendor.
Things get more complex when it comes to synchronizing the non-DB application state that needs to be shared among several servers. There are already a number of questions here that tackle the same problem, like here or here.
You might also be interested in a messaging system; I've heard good things about ZeroMQ.
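For a feel of what that looks like, here is a minimal sketch of broadcasting game-world events between servers with ZeroMQ pub/sub, assuming Python with pyzmq (the hostname, port, and topic prefix are placeholders; your server is C++, where the libzmq API is very similar).

```python
# Minimal sketch: one game server publishes world events, the others subscribe.
# Assumes pyzmq (pip install pyzmq); hostname, port and topic prefix are placeholders.
import time
import zmq

def publisher():
    # Run on the server where the event happens.
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")
    while True:
        pub.send_string('world.event {"player": 42, "action": "move", "to": [3, 7]}')
        time.sleep(1)

def subscriber():
    # Run on every other server that needs to mirror the event.
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://server-a:5556")              # hypothetical hostname
    sub.setsockopt_string(zmq.SUBSCRIBE, "world.")  # only world-state messages
    while True:
        print(sub.recv_string())
```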
Hope this helps.
Does anyone have experience with a lot of these?
I'm not so interested in the PDF creation part of LCDS.
Just for Flex messaging, which would give me the best performance? As far as I know, LCDS and WebOrb both do real-time streaming; is that correct?
Basically the question is which gives the quickest response and which will allow the most clients connected to a single servlet container.
Thanks
Edit 1
This may make it clearer what I want. I'm looking to serve at least 5000 clients with sub-second response times for push messages, and I'm trying to figure out which is the most scalable option; I've been quoted several million push messages a day. Obviously we can throw more servers at the problem, but I'm not convinced that's the most maintainable option.
It's not media streaming I'm looking for, but rather event updates. It must work without sticky sessions.
LiveCycleDS & WebOrb are the only ones providing messaging using sockets through RTMP protocol. Note that in this case the clients are not connected to a servlet container, but to a dedicated server included in the product distribution (bypassing the servlet mechanism).
There are more messaging servers on the market; Lightstreamer is one of them, as is Flash Media Server.
There are many more things to take into consideration when choosing a solution, however: price, integration with various architectures (like a DMZ) and frameworks, paid support, documentation, your relationship with the sales representative, etc.
I would like to know if there's an open source application that can:
-Be open-source (obviously free, no cost at all)
-Check which ports are being used and the bandwidth used by each of them
-Based on the requirements above, create a weekly report with details for each port per day and per time of day
I have read about Ethereal for the network monitoring and JasperReports for the report-creation stage, but haven't gone into much detail yet.
If my specifications cannot be met with a free app, then I could work with Java to check which ports are being used, but I still don't know if Java could handle ALL the requirements... I would really like an answer on that, because I could start working on it right now, but I want to be sure Java can cover everything.
PS: If Java can't be a solution, what would you suggest?
Suggestions for you:
Colasoft Capsa Free: http://www.colasoft.com
Spiceworks: new user, cannot give link.
Or google: free traffic monitor
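If you do end up scripting part of this yourself instead of (or alongside) those tools, here is a minimal sketch of the "which ports are being used" piece, assuming Python with the psutil package rather than Java; per-port bandwidth and the weekly report would still need packet capture and a reporting layer on top.

```python
# Minimal sketch: list which local ports are currently in use and by which process.
# Assumes psutil (pip install psutil); run with enough privileges to see all PIDs.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if not conn.laddr:
        continue
    try:
        proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
    except psutil.NoSuchProcess:
        proc = "exited"
    print(f"port {conn.laddr.port:>5}  status {conn.status:<12}  process {proc}")
```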