I am currently working in the computer networking area, more specifically on developing real-time applications for ad hoc networks. Recently I have worked with machine learning and I love it. I want to know whether there is any way I could work with both (computer networking and machine learning). I have searched a lot but have not found anything yet.
Machine learning is tricky; most people just think of it as calling library functions. You need to develop the ability to differentiate between ML algorithms (not just their use cases). So make sure to take a university course and get some hands-on practice at problem solving, not just program writing.
Has anyone done a comparison between R Connect Server and Power BI? We are trying to work out the benefits of R Connect Server over Power BI in order to convince our very strict IT management to go with R Connect Server.
Thank you.
You should figure out which decision variables are important to you. This RStudio thread goes into detail about the benefits; the short version is that R Connect Server is the better choice if you are going lightweight. Most likely your users are more technical and want more ability to build powerful tools themselves.
Power BI seems to be better suited to Excel "power" users. It does not handle large datasets well, and it is most likely aimed at a non-technical crowd.
Consider the end users before all else, then work backward from there.
I use several kinds of network mounts and transfers (Samba/Windows shares, sshfs, scp) on different networks (LAN, dial-up). Whenever it comes to transferring a large number of small files, I see poor performance, far from what would theoretically be possible. No resource appears really busy, so it seems to be a question about the software behind it (which is why I hope I am not off-topic here).
What is the problem behind this, from a software developer's perspective? Why do these tools not saturate any component of my system or the network?
Is it just that the Linux kernel makes some things complicated, or is there more to know about it?
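My rough back-of-envelope for why latency, rather than bandwidth, might be the bottleneck (every number here is an assumption, purely for illustration):

    # Back-of-envelope: why per-file round trips, not bandwidth,
    # can dominate small-file copies. All numbers are assumptions.
    files = 10_000
    avg_size_kib = 4
    rtt_s = 0.001              # assumed 1 ms LAN round trip
    round_trips_per_file = 3   # e.g. open, write, close (a protocol-dependent guess)
    bandwidth_mib_s = 100      # assumed link throughput

    latency_bound_s = files * round_trips_per_file * rtt_s
    bandwidth_bound_s = files * avg_size_kib / 1024 / bandwidth_mib_s
    print(f"latency-bound:   {latency_bound_s:.1f} s")   # 30.0 s
    print(f"bandwidth-bound: {bandwidth_bound_s:.1f} s") # 0.4 s

If each small file costs a few synchronous round trips, the wire sits idle most of the time, which would match what I observe.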
I would like to develop a Network Inventory application that works on any operating system.
Reports on every possible resource attached to a network.
Reports all pertinent details of hardware and software.
That (and I hate to use the phrase) is my "end game".
However, I am running before I can crawl here.
I have no experience with this type of development, e.g. discovering a computer's hardware and software settings.
I've spent almost two weeks googling and have come up short! :-(
So I am turning to you to ask these questions:
My first step is to find an existing open source project I can incorporate into my own code that extracts the fine-grained details I am after, e.g. everything there is to know about the hardware and software on a single machine.
Does such a project exist, or do I have to develop that first?
Do I have to write all this in C?
I am guessing that getting this information about a computer is going to be easier than for printers, scanners, routers, etc., i.e. everything else you would find attached to a network.
Once I have access to a single computer's details, I then need to investigate how I can traverse an entire network of printers, scanners, routers, load balancers, switches, firewalls, workstations, servers, storage devices, laptops, monitors; the list goes on and on.
One problem I have is that I don't have a 1000-machine network to play on!
Is there any such resource available on the internet? (Is that a silly question?)
Anywho, if you don't ask, you won't find out!
One aspect I am really looking forward to is finding out how to traverse the entire network (there is a rough sketch of what I mean below).
Should I be using TCP/IP for this?
What's a good site, blog, user group, or book for TCP/IP development?
How do I go about getting through firewalls?
How many questions can I ask in one go? :-)
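To make the "traverse" question concrete, here is the kind of naive first sweep I imagine starting from (Python standard library only; the subnet and port are placeholder assumptions, and real discovery would presumably use ICMP, ARP, SNMP and so on, run concurrently):

    # Naive TCP "is anyone there?" sweep, stdlib only.
    # The subnet and port below are placeholders, not recommendations.
    import ipaddress
    import socket

    def reachable(host, port=22, timeout=0.5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for ip in ipaddress.ip_network("192.168.1.0/24").hosts():
        if reachable(str(ip)):
            print(ip, "answers on port 22")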
My previous question on this topic ended up with Python being championed as the language to develop this application in.
Having looked at a few Python examples, they all seemed to be related to Windows networks and interrogating Windows Management Instrumentation (WMI). I had the feeling you can't rely on what's in WMI, and even if you can, that's no good for Unix networks.
Surely common code for extracting hardware and software details from a computer exists somewhere? Why can't I find it on the internet?
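To be concrete, this is roughly the kind of cross-platform snippet I am imagining (using the third-party psutil package; the field choices are just examples):

    # Sketch of a single-machine inventory, assuming the third-party
    # psutil package (pip install psutil) plus the standard library.
    import platform
    import psutil

    inventory = {
        "hostname": platform.node(),
        "os": f"{platform.system()} {platform.release()}",
        "logical_cpus": psutil.cpu_count(logical=True),
        "memory_bytes": psutil.virtual_memory().total,
        "disks": [p.device for p in psutil.disk_partitions()],
        "nics": list(psutil.net_if_addrs().keys()),
    }
    print(inventory)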
Please help!
There are no prizes, though :-(
Thanks in advance
I would like to apologise if I have broken forum rules or not tried hard enough on my own before asking for assistance.
I just would like to start moving forward with this, as it's one of the best projects I have been involved with.
I am inspired by the many different challenges involved, and by the thought that if I manage to produce a useful application at the end of it, it would hopefully be extremely helpful to many people.
That's it.
Thanks in advance
DD
As a software vendor with a discovery solution, I can only say: respect for wanting to start a new one :-). Just in case you are interested in what it could look like: http://www.jdisc.com
Now on to some of our experience:
Programming Language:
I wouldn't write it in C. Use Java or .NET. Those languages have great advantages when it comes to tracking down errors and problems; for instance, in Java (and I assume also in .NET), you can see the stack trace when something fails. For some pieces of code (e.g. WMI access), you might need C or C++ to reach Microsoft's native APIs; from Java, use a native interface or a COM bridge, and in .NET it should be even easier to access the Windows APIs.
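For what it's worth, since Python was championed in the question: there the COM plumbing is hidden by the third-party wmi package, so a WMI query of the kind discussed above is only a few lines. A minimal sketch, assuming Windows and a pip-installed wmi:

    # Minimal WMI query sketch; assumes Windows and the third-party
    # wmi package, which wraps COM via pywin32.
    import wmi

    conn = wmi.WMI()  # connect to the local machine's WMI service
    for os_info in conn.Win32_OperatingSystem():
        print(os_info.Caption, os_info.Version)
    for cpu in conn.Win32_Processor():
        print(cpu.Name)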
Devices:
Well, network printers, routers, and switches are actually easier to discover: they usually expose their information via SNMP, which is pretty easy to use and pretty robust. Getting information from Windows (or even Unix) systems is a bit trickier; protocols can be blocked, misconfigured, or messed up. We have had cases where WMI simply hung when requesting data from a remote device.
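To give a feel for it, an SNMP GET of a device's description is only a few lines in Python with the third-party pysnmp package (the community string and address are placeholders; this is a sketch, not production code):

    # SNMP GET of sysDescr (OID 1.3.6.1.2.1.1.1.0), assuming the
    # third-party pysnmp package and an SNMP v2c device.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData('public'),                 # community string: assumption
               UdpTransportTarget(('192.0.2.1', 161)),  # placeholder device address
               ContextData(),
               ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))))

    if error_indication:
        print(error_indication)
    else:
        for var_bind in var_binds:
            print(' = '.join(x.prettyPrint() for x in var_bind))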
Test Devices:
Since we are also a smaller company, we do not have 1000 different devices to test with either. But there are some things that might help:
a) For SNMP devices, use an SNMP simulator. We use MIMIC 9.0 from Gambit Communications and we are pretty happy with it. You can import SNMP walks from network devices and simulate a device as if it were on your network.
b) Secondly, use virtualization wherever possible. With VMware, you can install Windows, Linux, or even Solaris. We also use a project called GNS3 to emulate Cisco routers, firewalls, and Juniper routers.
c) The remaining devices you can only test if you have a customer who helps you with testing and implementing new devices.
These are just some ideas to start with. But I have to tell you that it is not trivial and it takes a lot of time...
I hope that gives you some ideas to start with...
I don't know whether it's open source, but we use Spiceworks (http://www.spiceworks.com) here as an IT management platform. You may get some use out of exploring it.
Apologies for this huge question... please bear with me and try to help :)
Previous employers have all had in-house hosting, or people other than me to deal with that side of things, and all my personal projects (i.e. low traffic) have been comfortably handled by servergrid.com, who allow any number of domains even in their basic package.
I am about to take on more serious projects and have little clue about hosting, the questions to ask, or what to look for. I have done some basic research, but I am honestly confused by the number of metrics involved when the main things I care about are SPEED and SCALING.
I have noticed that ServerGrid's database servers, for instance, share many hundreds of DB users per server, so I imagine a shared package where you're paying just $2/month for SQL Server, though a bargain, is not going to scale beyond a hobby site.
So:
Is moving to a dedicated or virtual dedicated server the simple answer to speed, and the only real metric I need to worry about?
Dedicated pricing is a big jump on ServerGrid. Are there premium shared services that don't put a bazillion people on the server? It doesn't seem obvious from the sites; would it make a huge difference?
The landscape seems to be changing in a big way: IIS 7 and Server 2008 seem to have all these features like isolated application pools and Hyper-V. Are these just BS hype, or things that seriously help with scaling and speed?
Lastly, cloud hosting (specifically http://www.rackspacecloud.com): it runs .NET, right? Is it fundamentally architecturally different from anything else, or just use of the word "cloud" for marketing? It looks very cool, but is it just normal hosting with a different billing model and a somewhat easier way to scale? Is this similar to the much-hyped Squarespace hosted blog/site system?
Sorry for my rambling style of question; I would be deeply grateful to someone who can, in relatively plain English, sweep away some of my basic misconceptions...
Thanks!
Okay, take a look at Amazon Web Services. They are very flexible in terms of infrastructure (both hardware and software), and I find their rates to be reasonable. Also, their business model revolves around "using", not "leasing" (i.e. you pay based on what you use, for how long, etc.).
I think it's a good starting point.
Since your main concerns are "speed" and "scale", you may also take a look at Windows Azure and SQL Azure:
Windows Azure
What is Windows Azure (a nice, brief video explanation by Steve Marx)
I would stay away from shared hosting for a "more serious" production deployment. Amazon's AWS is as good a place to start as any (Rackspace has a similar service, which now supports self-provisioning). Failing that, you might carefully evaluate how much scale you really need. If you know how many users you'll have and have any idea what their usage patterns will be, then get dedicated hosting to fit. If the number of your users is unknown and unpredictable, and their usage will be spiky, then go with AWS.
That would be my first-pass approach. YMMV, and it will take time to fine-tune your own approach.
I have been developing applications for 9 years now, mainly in Java. Now I have been asked to participate in the SVT team for the next release. Overall this means installing complex system setups and running specific user scenarios on those setups, as well as doing long runs and load runs.
Overall I am positive about it, as I will learn something new. But I am also afraid of losing some grip on and knowledge of programming, because I will not be doing it much.
I know that programming in side projects, such as helping with open source projects, is one alternative, but finding the time on top of family life and a full-time job is not that easy.
What do you think: does doing concrete testing work help you become a better software engineer?
Thanks in advance,
Michael
Testing isn't separate from programming.
You can still program automated systems so that you have regression testing. From unit tests up to really complex automated systems, the best tool I know is Selenium, which generates code you can use to build testing scripts in most languages.
There are other tools for non-web apps. But I personally believe that testing is quite far from "stopping coding", unless you're only doing user point-of-view testing.
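For instance, a minimal Selenium script in Python reads like ordinary code (assuming the third-party selenium package and a Firefox driver on the PATH; the URL and check are placeholders):

    # Minimal browser regression check; assumes pip-installed selenium
    # and a Firefox WebDriver available on the PATH.
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://example.com")   # placeholder URL
        assert "Example" in driver.title   # a trivial regression assertion
    finally:
        driver.quit()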
You can also do error injection, which will have you writing small singletons to inject faults into the memory of your application.
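A toy illustration of the idea using Python's standard-library unittest.mock (the fetch function and the injected fault are made up for the example):

    # Toy error injection with the stdlib's unittest.mock.
    # fetch() and the fault are hypothetical, purely for illustration.
    from unittest import mock

    def fetch(url, transport):
        """Return the response body, or None if the transport fails."""
        try:
            return transport.get(url)
        except ConnectionError:
            return None

    transport = mock.Mock()
    transport.get.side_effect = ConnectionError("injected fault")
    assert fetch("http://example.com", transport) is None
    print("fault handled gracefully")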
So you can code while testing ;) and learn new things too.
Having been on a testing team, I think it really helps, because you'll learn how to exploit code easily, which will pay off when you build your own API or app at a later date.
I would say it depends on your skills and temperament. Programming knowledge will serve you well while testing. At the same time, I know that testing requires a different approach and mindset, and is on a completely different career track. You can always keep up your programming skills by writing code for a project you like (even if you have to make one up).