MDI supported speed - networking

From Stack Overflow I learned that MII supports 100 Mbit/s.
What speed does MDI support? Where can I find information about MDI vs. MII, and which of the two is better?
Thanks

Related

Does NS-3 support Mobile IPv6?

I have to simulate a few scenarios of network mobility (MIPv6). I was told to use NS-3, but I can't find any good information about it. Can anyone shed some light on this? I just need to simulate a few simple scenarios of networks moving around...
Thanks anyway.
As far as I can tell, there are two ways to do ns-3 MIPv6 simulations (see the sketch after this list for the basic scenario setup):
using DCE: http://www.nsnam.org/~thehajime/ns-3-dce-umip/getting-started.html
using a native UMIP implementation described in http://eudl.eu/pdf/10.4108/ICST.SIMUTOOLS2010.8682, but I do not believe that it has been released.
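To give an idea of what the basic scenario setup looks like (independent of the MIPv6 signalling itself, which would come from DCE or UMIP as above), here is a minimal ns-3 sketch using the stock mobility API; the node roles, speed, and duration are arbitrary placeholders:

```cpp
// Minimal ns-3 sketch: two nodes, one moving at constant velocity.
// MIPv6 is NOT set up here; it would come from DCE/UMIP as described above.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/mobility-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  CommandLine cmd;
  cmd.Parse (argc, argv);

  NodeContainer nodes;
  nodes.Create (2);                 // e.g. a fixed node and a mobile node (hypothetical roles)

  MobilityHelper mobility;
  mobility.SetMobilityModel ("ns3::ConstantVelocityMobilityModel");
  mobility.Install (nodes);

  // Make node 1 "move around": start at the origin, drift along x.
  Ptr<ConstantVelocityMobilityModel> mob =
      nodes.Get (1)->GetObject<ConstantVelocityMobilityModel> ();
  mob->SetPosition (Vector (0.0, 0.0, 0.0));
  mob->SetVelocity (Vector (5.0, 0.0, 0.0));   // 5 m/s along x (arbitrary)

  Simulator::Stop (Seconds (60.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```

The mobility part is plain ns-3; whatever MIPv6 stack you choose would be layered on top of nodes configured like this.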

Radeon HD 4850 and OpenCL: will cl_khr_fp64 work on this video card?

This video card (Radeon HD 4850) conforms only to OpenCL 1.0, per the AMD compatibility table. I need some hardware to run intensive financial calculations with doubleN types (no floats at all!). According to that table, this card is able to work with double types, and I now have the chance to buy it at quite an attractive price.
I'd greatly appreciate an answer from someone with real experience using this card for OpenCL with the fp64 extension. Of course, if there are problems with this card, please write a couple of lines about them here.
Thank you and sorry for my English.
I haven't used this card with DP before, but if the spec says it is supported, then it's worth a try.
In my opinion, you should go with a newer model card, though. There are a lot of cheap cards out there that will outperform the 4850, and they will support some newer features as well.
This card supports double precision, but the 4xxx series doesn't include local memory on the chip. Since the standard mandates local memory support, it is emulated with global memory and is very slow. Many algorithms require local memory to obtain a good speed-up, so a newer card (5xxx or higher) is a lot better.
In addition, some combinations of older cards and older SDK versions only support double precision through the cl_amd_fp64 extension (not the official cl_khr_fp64 extension) because some small parts of the standard are not supported. For the most part this doesn't matter much, except that you need to change the extension name in your code to make it work with doubles.
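To make that concrete, here is a minimal sketch (first GPU device only, no error handling, the tiny kernel is just a placeholder) of how the host code could check which fp64 extension the device reports and prepend the matching pragma to the kernel source:

```cpp
// Sketch: query the device's extension string and choose the fp64 pragma.
// Error handling omitted; indices and kernel source are placeholders.
#include <CL/cl.h>
#include <cstdio>
#include <cstring>
#include <string>

int main ()
{
    cl_platform_id platform;
    cl_device_id   device;
    clGetPlatformIDs (1, &platform, NULL);
    clGetDeviceIDs (platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    char extensions[4096] = {0};
    clGetDeviceInfo (device, CL_DEVICE_EXTENSIONS,
                     sizeof (extensions), extensions, NULL);

    std::string pragma;
    if (strstr (extensions, "cl_khr_fp64"))
        pragma = "#pragma OPENCL EXTENSION cl_khr_fp64 : enable\n";
    else if (strstr (extensions, "cl_amd_fp64"))
        pragma = "#pragma OPENCL EXTENSION cl_amd_fp64 : enable\n";
    else {
        printf ("No double-precision extension reported.\n");
        return 1;
    }

    // Prepend the chosen pragma to your (hypothetical) kernel source.
    std::string source = pragma +
        "__kernel void scale(__global double* x) { x[get_global_id(0)] *= 2.0; }\n";
    printf ("%s", source.c_str ());
    return 0;
}
```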
As a general tip, I would avoid the 4xxx series if you intend to do serious GPGPU development. Keep in mind also that the newer 7xxx series is much more optimized for GPU computation than both the 5xxx and 6xxx series, closing much of the gap with NVIDIA cards. So, if you can, aim for a 7xxx card with double-precision support.

C10K for the modern world

Answering this question made me wonder: is there a nice modern version of the famous C10K page? Five years have passed since it was last updated, and I'm sure there have been advances in the field.
Perhaps the current version would be called something like the C1M problem? ;)
In the modern world of distributed load balancers and content distribution networks, I'm not sure that a single system having to handle such a large number of concurrent connections is as big a deal. These days scalability is more about scaling out to distribute load than scaling up to beef up a single machine's capabilities.
There's a write-up on a million-user comet application, though it investigates Erlang (plus OS tuning).
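For reference, the core single-machine technique the original C10K page is about is an event-driven loop over non-blocking sockets. A minimal Linux epoll echo-server sketch (error handling omitted, port number arbitrary) looks roughly like this:

```cpp
// One epoll loop multiplexing many non-blocking sockets: the C10K pattern.
// Linux only; error handling omitted; the port number is arbitrary.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

static void make_nonblocking (int fd)
{
    fcntl (fd, F_SETFL, fcntl (fd, F_GETFL, 0) | O_NONBLOCK);
}

int main ()
{
    int listener = socket (AF_INET, SOCK_STREAM, 0);
    make_nonblocking (listener);

    sockaddr_in addr;
    std::memset (&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl (INADDR_ANY);
    addr.sin_port        = htons (8080);
    bind (listener, reinterpret_cast<sockaddr*> (&addr), sizeof addr);
    listen (listener, SOMAXCONN);

    int epfd = epoll_create1 (0);
    epoll_event ev {};
    ev.events  = EPOLLIN;
    ev.data.fd = listener;
    epoll_ctl (epfd, EPOLL_CTL_ADD, listener, &ev);

    epoll_event events[1024];
    for (;;) {
        int n = epoll_wait (epfd, events, 1024, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listener) {
                // New connection: register it with the same event loop.
                int client = accept (listener, nullptr, nullptr);
                if (client < 0) continue;
                make_nonblocking (client);
                epoll_event cev {};
                cev.events  = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl (epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                // Echo whatever arrives; close on EOF or error.
                char buf[4096];
                ssize_t r = read (fd, buf, sizeof buf);
                if (r <= 0) close (fd);
                else        (void) write (fd, buf, r);
            }
        }
    }
}
```

Whether one box needs to hold ten thousand or a million of these connections is exactly the scale-up vs. scale-out question raised above.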

OpenCL vs. DirectCompute?

I'm looking for comparisons between OpenCL and DirectCompute, but I haven't found anything. OpenCL's advantages of being cross-platform and having a wider range of supported GPUs don't matter to me. I'm fine with coding on Windows against DX11 GPUs only. Assuming that, what are the pros and cons of each API?
I know this question was raised before, but I'm looking for more details.
I'm not interested in CUDA, since I don't want to restrict myself to only Nvidia hardware.
Probably the biggest difference for a coder is that DirectCompute is programmed in a language similar to HLSL, while OpenCL is programmed in a C-like language.
Another difference to consider is that, generally, for commodity level GPUs, the DirectX support is better (faster and less buggy) than OpenGL support on Windows. This may translate to more stable support for DirectCompute, but really, this is just speculation.
Well, the major advantage of OpenCL is that it is not limited to graphics cards. You can make use of your multicore CPU, your graphics card, and potentially any number of other hardware acceleration devices (DSPs, etc.) all from the same program.
I'm not sure if DirectCompute allows that freedom.
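To illustrate the point, here is a hedged sketch of OpenCL host code that enumerates every device of every type (CPU, GPU, or other accelerator) across all installed platforms; error handling is omitted and buffer sizes are arbitrary:

```cpp
// Sketch: list every OpenCL device of every type across all platforms,
// illustrating that one API covers CPUs, GPUs, and other accelerators.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main ()
{
    cl_uint num_platforms = 0;
    clGetPlatformIDs (0, NULL, &num_platforms);
    std::vector<cl_platform_id> platforms (num_platforms);
    clGetPlatformIDs (num_platforms, platforms.data (), NULL);

    for (cl_platform_id p : platforms) {
        cl_uint num_devices = 0;
        clGetDeviceIDs (p, CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
        std::vector<cl_device_id> devices (num_devices);
        clGetDeviceIDs (p, CL_DEVICE_TYPE_ALL, num_devices, devices.data (), NULL);

        for (cl_device_id d : devices) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo (d, CL_DEVICE_NAME, sizeof (name), name, NULL);
            clGetDeviceInfo (d, CL_DEVICE_TYPE, sizeof (type), &type, NULL);
            const char *kind = (type & CL_DEVICE_TYPE_CPU) ? "CPU" :
                               (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "other";
            printf ("%s device: %s\n", kind, name);
        }
    }
    return 0;
}
```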
OpenCL's cross-platform nature is not just a detail: the host code (the code calling the OpenCL API and submitting kernels) can itself be cross-platform.
Write once, run on any GPGPU, anywhere.
Otherwise, the OpenCL tooling is really getting better, with an ATI Stream plugin for Visual Studio, the NVIDIA and ATI SDKs that contain tons of samples, etc.
Another option now is C++ AMP, which gives you modern C++ syntax without the need for a separate compiler while still preserving hardware portability. Please follow the links from here for more info, and feel free to post questions as you have them: http://blogs.msdn.com/b/nativeconcurrency/archive/2011/09/13/c-amp-in-a-nutshell.aspx
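To give a flavour of it, here is a minimal C++ AMP sketch (Visual C++ only; the function and variable names are just illustrative) that adds two vectors on the default accelerator:

```cpp
// Minimal C++ AMP sketch: element-wise vector addition on the default
// accelerator (e.g. a DX11 GPU). Names are illustrative.
#include <amp.h>
#include <vector>

void add_vectors (const std::vector<float>& a,
                  const std::vector<float>& b,
                  std::vector<float>& c)
{
    using namespace concurrency;

    const int n = static_cast<int> (a.size ());
    array_view<const float, 1> av (n, a);
    array_view<const float, 1> bv (n, b);
    array_view<float, 1>       cv (n, c);
    cv.discard_data ();                        // no need to copy c to the device

    parallel_for_each (cv.extent, [=] (index<1> i) restrict(amp)
    {
        cv[i] = av[i] + bv[i];
    });

    cv.synchronize ();                         // copy the result back to c
}
```

The kernel is just a lambda marked restrict(amp), compiled by the same Visual C++ compiler as the rest of the program.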
I use OpenCL because I can easily port my app to Linux, which is not possible with DirectCompute.
I also think that the performance of OpenCL implementations will increase with time (reaching the same level as CUDA on NVIDIA cards), and that the driver bugs will (hopefully ;) ) be eliminated over time.

Scaling exercises to practice

I was trying to find some online exercises to practice scaling techniques (memcached, SQL optimization, sharding databases), but I could only find descriptions of these techniques, not any project on which to try them.
This link with slides on scaling techniques is an interesting one, as it sums up some tools for achieving scalability quite well.
Is there a Project Euler kind of site for this kind of activity? Or at least some exercises (such as a downloadable ASP.NET/PHP site with obvious slowdowns, concurrency issues, and subtle bugs) for people to try, so they can learn how to fight these issues?
I find that the site High Scalability has some nice insights.
It might be interesting to hack on WordPress. Its caching plugins take care of a lot of scaling issues, but it would be cool to write your own plugin or hack on the source to cut down on SQL queries or to cache static pages. If you come up with something, make sure to let the rest of the community know!
George's slides are definitely a good basis to work from. Note that he is not talking about a specific technique or technology; rather he's discussing more general architectural and design decisions that will help your application scale as a whole.
I personally think this sort of high-level thinking would be much more valuable than individual optimisation techniques. Perhaps you could take a well known web application and hack it until it scales well across multiple machines? A cluster of lots of cheap, low-power EC2 machines could be really useful here. Getting an existing or new application to run properly across a number of machines would be a fantastic exercise.
Counter-intuitively, rather than getting as much as possible to run on a single machine, I'd say it would be much more educational to get the same application running on several machines.
Once you have that, it makes sense to move onto more specific improvements like a separate static content tier, memcached, DB sharding, batch operations and so on.
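As a concrete exercise for the memcached step, the pattern to practise is cache-aside: check the cache, fall back to the database on a miss, then populate the cache. A sketch of that pattern (Cache and Database are hypothetical stand-ins for a real memcached client and your data store; the TTL is arbitrary) could look like this:

```cpp
// Cache-aside sketch. Cache and Database are hypothetical interfaces standing
// in for a memcached client and a real data store.
#include <optional>
#include <string>

struct Cache {
    virtual std::optional<std::string> get (const std::string& key) = 0;
    virtual void set (const std::string& key, const std::string& value,
                      int ttl_seconds) = 0;
    virtual ~Cache () = default;
};

struct Database {
    virtual std::string load_user_profile (const std::string& user_id) = 0;
    virtual ~Database () = default;
};

// Read path: hit the cache first, only touch the database on a miss.
std::string get_user_profile (Cache& cache, Database& db,
                              const std::string& user_id)
{
    const std::string key = "user_profile:" + user_id;

    if (auto cached = cache.get (key))
        return *cached;                            // cache hit: no SQL query issued

    std::string profile = db.load_user_profile (user_id);   // cache miss
    cache.set (key, profile, 300);                 // e.g. cache for 5 minutes
    return profile;
}
```

Measuring how many database queries this removes from a busy page is a nice, self-contained exercise before moving on to sharding.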
In terms of specific projects to work on, how about cloning Twitter, Flickr, or The Pirate Bay? They've all had performance and scaling challenges in the past.
