Dynamically traversing MIB to find OIDs to extract information - networking

I am currently attempting to pull specific information from the MIBs of a couple of devices.
These will mainly be Cisco devices. I was wondering whether there are any common OIDs across all devices that I can query, or whether they would need to be individually hard-coded in a config file? Or is there any way I can dynamically search for these OIDs?
Correct me if I am wrong, but to my understanding, the MIB set for each device type is different and there are very few common elements within them, most of which will be manufacturer specific?
I am trying to retrieve things like
CPU usage
HDD free space
uptime
etc...

Yes, all devices implement different sets of MIBs. However, many devices will implement the same standard MIBs, so there may be common variables that you could poll from all of them, particularly if they're from a single vendor.
There are two fairly straightforward ways to find out the MIB set implemented by a device:
SNMP walk it and compare the output
Ask the vendors/documentation about which MIBs are implemented.
Some devices also store the MIB files on their file system, making it possible to fetch the canonical list from there, but that's not applicable in 100% of the cases.
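For the specific counters mentioned in the question, a few standard OIDs are worth trying before falling back to vendor MIBs: SNMPv2-MIB sysUpTime for uptime, and HOST-RESOURCES-MIB hrProcessorLoad and hrStorageTable for CPU and disk, keeping in mind that Cisco IOS devices often expose CPU figures through the vendor-specific CISCO-PROCESS-MIB instead. Below is a minimal sketch using the net-snmp C library that polls sysUpTime; the device address and community string are placeholders and the other OIDs are listed in comments.

// Minimal net-snmp poller sketch: reads SNMPv2-MIB::sysUpTime.0 from a device.
// The address "192.0.2.1" and community "public" are placeholders.
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <cstring>

int main() {
    netsnmp_session session, *ss;
    netsnmp_pdu *pdu, *response = nullptr;
    oid name[MAX_OID_LEN];
    size_t name_len = MAX_OID_LEN;

    init_snmp("mib-probe");
    snmp_sess_init(&session);
    session.peername = strdup("192.0.2.1");           // placeholder device address
    session.version = SNMP_VERSION_2c;
    session.community = (u_char *)"public";           // placeholder community
    session.community_len = strlen("public");

    ss = snmp_open(&session);
    if (!ss) return 1;

    pdu = snmp_pdu_create(SNMP_MSG_GET);
    // SNMPv2-MIB::sysUpTime.0 -- other standard OIDs worth walking, if implemented:
    //   1.3.6.1.2.1.25.3.3.1.2   HOST-RESOURCES-MIB::hrProcessorLoad (CPU)
    //   1.3.6.1.2.1.25.2.3       HOST-RESOURCES-MIB::hrStorageTable (disk/memory)
    read_objid("1.3.6.1.2.1.1.3.0", name, &name_len);
    snmp_add_null_var(pdu, name, name_len);

    if (snmp_synch_response(ss, pdu, &response) == STAT_SUCCESS &&
        response->errstat == SNMP_ERR_NOERROR) {
        print_variable(response->variables->name,
                       response->variables->name_length,
                       response->variables);
    }
    if (response) snmp_free_pdu(response);
    snmp_close(ss);
    return 0;
}

The same walk can be scripted across a device list; whichever OIDs return noSuchObject simply are not implemented by that device, which is effectively the "dynamic search" the question asks about.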


ESP32 Partition and Data Storage

I am trying to write firmware for an RFID device that needs config data storage as well as temporary storage that can be read back and, when convenient, erased.
I am using the Arduino IDE to program an ESP32-WROOM-32. I have tried to understand how the storage actually works from various resources, one being the datasheet, which says that up to 4 MB of program code storage is possible, and that sounds fantastic. My question is: if, for example, I use the EEPROM library and save about 214 bytes of config data that will rarely be touched, where exactly is it stored? Is it simply in NVS? I can see that the default settings give me about 1310720 bytes of sketch storage, and I know I can use other partitions to store more in case I ever need more sketch storage than 1310720 bytes.
My question is: if I am trying to store data such as config and real-time data, how much would I be able to store? Is there a limit? Would it cause any kind of problems if I try to use the other partitions to write to? Will it only be NVS that stores that data, or can I use the other partitions (app0, app1, spiffs, etc.) to store extra bytes? A lot of the resources are confusing me; here are the data I am referring to from online: 1 and 2. Any ideas would help me move forward.
P.S. I am aware that the EEPROM library has been deprecated and that I should use either Preferences or LittleFS for better management, but if I understand correctly I can still use it without much issue, since compatibility is still provided. I am also curious about using the built-in RTC SRAM via the RTC_DATA_ATTR attribute, since I also hope to make use of deep sleep mode.
My question is if I am trying to store data such as config and real time data, how much would I possibly be able to store? Is there a limit?
It depends. First, on the module: there is an ESP32-WROOM with 4 MB flash, but you can also order variants with different flash sizes.
Then the question is: how big is your application (code)? Obviously this needs to be stored in flash as well, reducing the total usable amount for data storage (by the size of the application). The bootloader also needs a small amount of space.
Next, the ESP32 uses a partition scheme. One partition is reserved for the bootloader. The rest can be divided between one or more application partitions, NVS partitions, and possibly other utility partitions (e.g. OTAData).
If you are using the OTA functions, there will be at least 3 application partitions of equal size, further reducing the total usable amount for data storage.
So the absolute upper limit of what you can store using NVS functions is the size of your NVS partition. However since it's a key-value storage, you must take into account the size of the key, which can be considerably larger than the data you store (up to 12 times for a 12 character key and a uint8 value).
So there is no way to say exactly how much data you can put into the system without knowing exactly how you're going to use it. For example, you could store one very large "blob" value that could take "up to 97.6%" of the partition size. But you could not store 10 "blob" values of 1/10 (9.76%) the size since you must take into account the keys and some flash metadata used internally.
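As a rough illustration of what the NVS route looks like in practice, here is a minimal Arduino sketch fragment using the Preferences library; the namespace name "config", the key "cfg", and the 214-byte buffer are placeholders matching the question.

#include <Preferences.h>

Preferences prefs;
uint8_t configBlob[214];   // placeholder config data, as in the question

void setup() {
  // Open (or create) an NVS namespace called "config"; false = read/write mode
  prefs.begin("config", false);

  // Store the blob under the key "cfg"; key, value and NVS metadata all
  // consume space in the NVS partition
  prefs.putBytes("cfg", configBlob, sizeof(configBlob));

  // Read it back later (e.g. after a reboot)
  size_t len = prefs.getBytes("cfg", configBlob, sizeof(configBlob));
  (void)len;

  prefs.end();
}

void loop() {}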
Would it cause any kind of problems if I try to use the other such partitions to write the code?
That depends on what these partitions are used for. If you overwrite the partition table, the bootloader, or your application code, yes, there will be problems. If there is "free space", then it won't be a problem, but then you should redefine this free space as NVS space. It's nice of Espressif to provide this NVS library; don't work around it, work with it.
Using Espressif's esptool you can create custom partition tables where you could minimize the size of the application partition to just barely fit your application, and maximize the NVS partition size. This way you will get the most storage out of your device without manually implementing a filesystem. If you are using OTA, you should leave some empty room in your application partition, in case your application code grows, as it usually does.
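For illustration only, a custom partition table is just a CSV file. The layout below assumes a 4 MB module, drops the OTA partitions, shrinks the app partition and enlarges NVS; the offsets and sizes are made-up values and would need adjusting to your actual flash and sketch size.

# Name,   Type, SubType, Offset,   Size,     Flags
nvs,      data, nvs,     0x9000,   0x200000,
phy_init, data, phy,     0x209000, 0x1000,
factory,  app,  factory, 0x210000, 0x1E0000,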
Will it be only NVS that is storing that data or can I utilise the other app0, app1, spiffs etc to store extra Bytes?
You absolutely can, but you will destroy whatever data is on that partition. And you will have lots of work to do, because you'll have to implement all of this yourself (basically roll your own flash driver).
If you don't need OTA, you don't need the app0/app1 partitions at all.
Note that SPIFFS is also a way to store data, except it's not key-value but file-based. If you don't need it, remove that partition and fill the space with your NVS partition.
On the other hand, SPIFFS is probably a better alternative if you are really tight on flash space, since you can omit the key and do your own referencing.
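For comparison, a file-based approach on the ESP32 Arduino core looks roughly like this (shown with SPIFFS; LittleFS has a nearly identical API). The file name and data are placeholders.

#include <SPIFFS.h>

void setup() {
  Serial.begin(115200);

  // Mount the SPIFFS partition; 'true' formats it on first use
  if (!SPIFFS.begin(true)) return;

  // Write some data to a file
  File f = SPIFFS.open("/config.bin", FILE_WRITE);
  if (f) {
    const uint8_t data[] = {0x01, 0x02, 0x03};
    f.write(data, sizeof(data));
    f.close();
  }

  // Read it back
  f = SPIFFS.open("/config.bin", FILE_READ);
  if (f) {
    while (f.available()) {
      Serial.write(f.read());
    }
    f.close();
  }
}

void loop() {}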

Network-on-chip Verilog code

I have written and simulated Verilog code in ISE Project Navigator 2013. This is an RTL model that describes the network-on-chip routers, buffers and links.
Which device is better for synthesis and implementation?
How can I get the static and dynamic power consumption, packet transfer delay, area, and the other factors that indicate network performance, using ISE Project Navigator?
The question is very open ended so I will try to provide as general an answer as possible.
Now you have said that you have the code for a NoC router in ISE. This would imply that you or the designer has a rough idea of the frequency at which the internal logic/system has to operate. The maximum clock frequency of your target device would then be one of the key parameters that you need to check. If your design is running at around 150-200 MHz and is appropriately pipelined (small muxes, not more than 2-3 levels of logic between pipelining stages), then almost any of the currently available device families from both Xilinx and Altera should be suitable.
The next important consideration is external connectivity. Does your design need high-speed serial connectivity with an external device? If so, then you would need to select a device that has high-speed SERDES blocks built in. That would then limit your choice of devices.
Another factor to consider is interface to external SDRAM or RLDRAM. If your design needs to interface with such external devices, then you need to pick a device that has support either through a softcore or a Megafunction (Altera) or a hard IP block.
Finally you need to look at your logic utilization. You want to choose a device that is just big enough to satisfy your requirements, unless your design is part of a bigger project and there are modules that would be designed later and would sit alongside your NoC. You would have to make a rough guess of the number of LEs/LUTs that your design would need and pick a device 50% bigger than that. You can then run a trial synthesis and check whether your estimates are okay. If they are, and your device is less than 50% utilized, you could move to a smaller device as needed.
There are also a few other considerations, such as the number of I/Os and the presence of a PLL/clock manager, that may affect your choice of device.

DICOM C-StoreSCP: How to know in advance number of images SCU will send?

I have a DICOM C-StoreSCP application which receives DICOM images from my other C-StoreSCU application. My SCU always sends one (and only one) complete study (all images from the given study) on one association. So the SCP always knows that all images received from the SCU belong to a single study. I know I can also check the StudyIUID, but that is not my point of interest here.
I want to know the total number of images in the study that is being transferred. Using this, I want to display a status like "Received 3 of 10 images..." on screen. I can count the images received (3 in this case), but how can I know the total number of images in the given study (10 in this case) that is being transferred?
Workaround:
On receiving the first C-Store request on the SCP, I could read the StudyIUID, establish a new association with the SCU (the SCU would also have to support Q/R SCP capabilities in this case), and get the total count of images in the study using C-FIND.
Limitations:
The SCU must also support Q/R SCP features.
The SCU must include the image count in its C-FIND response.
The SCU must always send all images from only one study on one association.
I can easily overcome these limitations if I write the SCU (with Q/R SCP capabilities) myself. But my SCP also receives images from third-party SCUs that may not implement the necessary features.
Please suggest whether there is any DICOM-compatible solution.
Is this possible using MPPS? I have not worked with the MPPS part of DICOM yet.
Conclusion:
The accepted answer (kritzel_sw) suggests a very good solution (using MPPS) with only one drawback: MPPS is not a mandatory service for every SCU. MPPS applies only to SCUs that actually acquire the images, i.e. modalities. Not all modalities support MPPS out of the box; the feature may need to be unlocked with additional license costs and configuration. Also, there are many scenarios where modalities push instances to some intermediate workstation, and the workstation then pushes them on to the SCP.
Maybe I need to look into a combination of a DICOM and non-DICOM approach.
Good question, but no simple answer.
Expecting a Storage SCU to support the C-FIND-SCP as well is not going to work well in practice unless you are referring to archive servers / VNAs.
MPPS is not a bad idea. All attributes (Study, Series, SOP Instance UID) you need are mandatory, so it should be valid to rely on them. "Should" because I have seen vendors violating these constraints.
However, how can you be sure that the SCU has received the complete study? Maybe the study consists of CT and MR series, but the SCU sending the images to you only conforms to CT and refuses to receive the MR images.
You might want to consider the Instance Availability Notification service which is another service class with which information about "who has got which image" can be made available to other systems. Actually this would exactly do what you need, because you know in advance for each AET ("device") which images are available there. But this service is not widely supported in practice.
Even if you really know which images are available on the system that is sending the study to you, how can you be sure that there is no user sitting in front of it who has just selected a sub-set of the study for sending?
Sorry that I cannot provide a "real solution" to you, but for the reasons I have mentioned above, I am not aware of any real-world system which supports the functionality (progress bar) you are describing.

Is it OK for a DirectShow filter to seek the filters upstream from itself?

Normally seek commands are executed on a filter graph, get called on the renderers in the graph and calls are passed upstream by filters until a filter that can handle the seek does the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without it affecting the downstream portion of the graph in unexpected ways? I wouldn't expect there to be any graph state changes caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately and distinguish samples that arrived before and after the seek operation. It would also need to set new sample times on its output samples so that the output samples had consistent sample presentation times. Any other issues?
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is available in BBC White Papers 129 and 138 from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes - by implementing smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (sections of a graph running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip. It handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph that houses both video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know what happens upstream.
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to cue to the exact media frame or audio time. This is not as easy to do as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in the files. If you do not do this, the filter may wait indefinitely to get a NewSegment call with the correct media times.
Your sub graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders may seek to the nearest key frame, which is a bit unhelpful and some are a bit arbitrary about the timings of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter, because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, and have effects that span the transition, you need to have two instances of the sub-graph for that file, and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: issuing lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are implemented by the filter, so you need to get them correct without any missing time stamps. You can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
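To make the mechanics concrete, here is a rough C++ sketch of the kind of upstream seek discussed here: the filter takes the output pin connected to one of its input pins, queries it for IMediaSeeking and issues SetPositions, then restamps the samples it delivers downstream. The function names, the rtOffset variable and the offset logic are illustrative only, not taken from the BBC code.

#include <dshow.h>

// Ask the upstream chain feeding pInputPin (this filter's own input pin)
// to reposition itself; times are in 100 ns units.
HRESULT SeekUpstream(IPin *pInputPin, REFERENCE_TIME rtStart, REFERENCE_TIME rtStop)
{
    IPin *pUpstreamOut = NULL;
    HRESULT hr = pInputPin->ConnectedTo(&pUpstreamOut);   // upstream filter's output pin
    if (FAILED(hr))
        return hr;

    IMediaSeeking *pSeek = NULL;
    hr = pUpstreamOut->QueryInterface(IID_IMediaSeeking, (void**)&pSeek);
    if (SUCCEEDED(hr))
    {
        // This triggers BeginFlush/EndFlush/NewSegment on our input pin,
        // which the filter must be prepared to handle.
        hr = pSeek->SetPositions(&rtStart, AM_SEEKING_AbsolutePositioning,
                                 &rtStop,  AM_SEEKING_AbsolutePositioning);
        pSeek->Release();
    }
    pUpstreamOut->Release();
    return hr;
}

// Restamp each outgoing sample so the downstream timeline stays continuous
// regardless of the upstream seek; rtOffset would be maintained by the filter.
void RestampSample(IMediaSample *pSample, REFERENCE_TIME rtOffset)
{
    REFERENCE_TIME rtStart = 0, rtStop = 0;
    if (SUCCEEDED(pSample->GetTime(&rtStart, &rtStop)))
    {
        rtStart += rtOffset;
        rtStop  += rtOffset;
        pSample->SetTime(&rtStart, &rtStop);
    }
}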
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk

How to store a huge hash table in RAM and share it between different applications?

The data contains information like billions of ID-score pairs. To quickly access this paired information, I plan to use a hash-table container, since its search time complexity is O(1). Considering the raw data is around 80G, I don't want to load the data into RAM every time I need to run the search application. What I want to do is generate the hash table once and then keep it in RAM with filesystem-lifetime persistence (the expense of RAM is not a concern), and search it from different applications.
Based on my limited understanding, I could use "Memory Mapped Files" (boost C++ libraries). But I have questions:
1) Is it possible to keep the hash-table data structure when writing it to the mapped file?
2) How much time will it take to map the existing file into RAM?
Any answers/comments/suggestions are most welcomed!
Thanks,
1) Yes. The file is just bytes, just like memory.
2) Creating the mapping will be effectively instantaneous. Note that you won't be able to map all of it contiguously at once except on a 64-bit OS. Of course, if the file cache can't hold the portion of the map you're using, it will have to be read from disk.
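One caveat to (1): an ordinary in-memory hash table holds raw pointers, so writing it out byte-for-byte will not survive being mapped at a different address or into another process. The usual approach is to build the table directly inside the mapped file using allocators that keep all nodes (and offset-style pointers) within the mapping, for example with Boost.Interprocess. A minimal sketch, assuming Boost is available; the file name, segment size, and key/value types are placeholders.

#include <boost/interprocess/managed_mapped_file.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/unordered_map.hpp>
#include <boost/functional/hash.hpp>
#include <cstdint>
#include <functional>
#include <utility>

namespace bip = boost::interprocess;

// ID -> score pairs stored inside the mapped file
typedef std::pair<const std::uint64_t, double> Entry;
typedef bip::allocator<Entry, bip::managed_mapped_file::segment_manager> FileAllocator;
typedef boost::unordered_map<std::uint64_t, double,
                             boost::hash<std::uint64_t>,
                             std::equal_to<std::uint64_t>,
                             FileAllocator> FileHashMap;

int main()
{
    // Open or create a file-backed segment ("scores.dat" and 1 GB are placeholders);
    // every process that maps the same file sees the same table.
    bip::managed_mapped_file segment(bip::open_or_create, "scores.dat", 1ULL << 30);

    FileHashMap *table = segment.find_or_construct<FileHashMap>("id_scores")(
        1024,                                  // initial bucket count
        boost::hash<std::uint64_t>(),
        std::equal_to<std::uint64_t>(),
        segment.get_allocator<Entry>());

    (*table)[123456789ULL] = 0.97;             // writer side
    double score = table->at(123456789ULL);    // reader side (possibly another process)
    (void)score;
    return 0;
}

Since the table lives entirely in the mapped file, a second application can open the same file read-only and look up pairs without rebuilding anything; only the pages actually touched need to be faulted in from disk.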
How big are IDs? How big are pairs? How much locality of reference do you have? (Are there heavily-used pair and lightly used pairs?) How often will you be searching for pairs that aren't present? Is the data read-mostly? There may be better ways to do it. I'd strongly suggest starting with a broader question to make sure you're not stuck on a sub-optimal path.
