There are two questions:
1) What is the difference between cluster and Grid
2) What is the Cloud
I am not looking for conceptual definitions;
I found plenty of those by googling, but the problem is that I still do not get it.
So I believe the answer I seek is different. From what I could research online, I have started to think that
many of the writers trying to explain this either do not understand it deeply enough themselves
or cannot explain their knowledge to an average guy like myself (a common issue with very technical people).
Just to let you know my level: I am a computer programmer (.NET and LAMP), I can do basic administration on both
Linux flavors and Windows, I have hands-on experience with Hyper-V, and I am now researching Xen and XCP
to set up a test cloud based on two computers for learning purposes.
You do not have to read the information below; it is just my current understanding of cluster, grid, and cloud,
included to support my two questions, because I thought it would help to show
what kind of mess is in my head right now and what answers I am looking for.
Thank you.
The two computers used for reference in my statements are "A" and "B".
Specs for A: 2-core Intel CPU, 8 GB memory, 500 GB disk
Specs for B: 2-core Intel CPU, 8 GB memory, 500 GB disk
Now I would like to look at the roles of A and B from the cluster, grid, and cloud angles.
Common ground between Cluster and Grid
1) A cluster or grid is 2 or more computers hooked up together; at the hardware level
they are connected through network cards, and at the software level
some kind of program implementing a message passing interface (MPI)
makes it possible to send commands between nodes.
2) A cluster or grid does NOT combine CPU power or memory across nodes, meaning
that in this scenario a Firefox browser running on A still has only one 2-core CPU,
8 GB memory, and 500 GB of disk available.
Differences between Cluster and Grid:
1) A cluster only provides the failover part: if node A breaks while Firefox is running,
the cluster software will restart the Firefox process on node B.
2) A grid, however, is able to run software in parallel on multiple nodes at the same time,
provided that software is coded with MPI in mind. It can also launch any software on any node
on demand (even if it is not written for MPI).
3) A grid is also able to combine different types of
nodes, e.g. a Linux server, Windows XP, an Xbox, and a PlayStation, into one grid.
Cloud definition:
1) Cloud is not a technical term at all; it is just a short, convenient word to describe
a computer of unlimited resources. It could also have been called a Supercomputer, a Beast, an Ocean, or a Universe, but someone
said "Cloud" first and here we are.
2) A cloud can be based on grids or on clusters.
3) From a technical point of view, a cloud is software that combines hardware resources into one,
meaning that if I install cloud software on a grid or cluster it will combine A and B
and I will get one cloud like this: 4-core CPU, 16 GB memory, and 1000 GB disk.
Edited: 2013.04.02
Item 3) was complete nonsense; a cloud will NOT combine resources from many nodes into one huge resource, so in this case there will be no 4-core CPU, 16 GB memory, 1000 GB cloud.
Grid computing is designed to parcel out large workloads to many participating grid members--through software on each member which is expecting to hear that request for computation or for data, and to reply with its small piece of the overall puzzle. Applications must be written specifically for this approach to problem-solving. It can be heterogeneous because it is not the OS that matters but the software waiting to hear problem-solving requests.
The expectation of a cluster is that it can run the same executable image across any member node--any node can execute that code--which is what drives its requirement for homogeneity. You can write cluster-aware code which distributes workload throughout the cluster, but again you have to write your code to be cluster-aware in order to take advantage of more than the redundancy features of a cluster. As most application vendors do not write cluster-aware code, the simple redundancy feature is all that is commonly used in cluster deployments, but that does not limit the architecture. Clusters can and do share their resources, and can collaborate on tasks simultaneously.
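To give a feel for what "written with MPI in mind" or cluster-aware code looks like, here is a minimal sketch using the mpi4py Python bindings (chosen purely for illustration; the launch command, node count, and toy workload are all assumptions, not anything from the posts above). One process per node scatters chunks of work and gathers partial results back:

```python
# Minimal MPI-style workload distribution sketch (illustrative only).
# Assumes mpi4py is installed and the script is launched with, e.g.:
#   mpiexec -n 4 python mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id (one per node, typically)
size = comm.Get_size()      # total number of processes

if rank == 0:
    # Process 0 splits the work into one chunk per process.
    work = list(range(100))
    chunks = [work[i::size] for i in range(size)]
else:
    chunks = None

# Every process receives its own chunk...
my_chunk = comm.scatter(chunks, root=0)

# ...does its share of the computation...
partial = sum(x * x for x in my_chunk)

# ...and process 0 gathers the partial results.
results = comm.gather(partial, root=0)
if rank == 0:
    print("total:", sum(results))
```

The point is simply that the distribution of work is explicit in the application code, which is why software must be written for the grid or cluster to benefit from more than redundancy.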
Cloud, as it is commonly defined, is neither of these, precisely, but it doesn't preclude them, either. Cloud computing assumes the ability to deploy an application without advance knowledge of its underlying operating system, or even control of that operating system, coupled with the ability to expand or reduce the processing and memory footprint available to that application without having to destroy and recreate that environment--all done with enough isolation that the application won't know or be able to know what other applications might be installed or running on its shared infrastructure, unless that access is approved of by both application managers.
I would like to answer my own question before this is closed as a duplicate, because I believe it can be very frustrating to find correct info regarding clusters, grids, and clouds, and I think this post can save time for many. If someone wants to challenge it, please do so; otherwise I will mark it as the answer in 1 week.
1) There are many differences and there are none; it really depends on the technical context, but
generally you can connect several nodes and call it a grid, or you can call it a cluster. I would say a grid is a cluster with extended capabilities, such as the ability to connect heterogeneous nodes. Both grid and cluster will serve equally well as a scale-out platform. From a network engineer's and programmer's perspective, the difference in implementation or coding will be pretty big if the grid connects heterogeneous nodes.
2) Now, the first question was actually a prelude to the second one, and I believe it is best answered by
Matt Joyce in this post:
https://stackoverflow.com/a/15286488/2230126
I'll take a crack at it. I have been collecting and saving my notes, scripts, and programs since the year 2002 A.D. This is a chop and paste of my statements over the years. Here is a brain friendly memorization list:
The grid is the hardware and hardware specifications.
a. You plug into the router or switch and set up IP addresses and top-level domains over the internet (top-level domains being administered by ICANN).
b. This is like OSI level 1, 2, and 3.
The cluster is the kernel (software ring 0, or ring 1 if there is some virtualization going on).
a. The kernel is configured (compiled) to run a network stack that can handle sessions, permissions, and account authentication.
b. You set up port-to-port communication, usually over TCP/IP (as in the OSI model).
c. You set up iptables, pf, arp, and other OS-level applications or shared objects.
d. You can set up ssh, Kerberos, LDAP, or some other PKI database and protocol-socket combo.
e. This is like OSI level 4, 5, and 6.
The cloud is user-space applications.
a. The application processes talk to other application processes within the cluster.
b. You set up process-level permissions (via files, cgroups, and/or user groups).
c. You set up MySQL, Redis, Riak, message brokers, Hadoop, Apache, nginx, cron, Java, Haskell, Erlang, et cetera.
d. This is like OSI level 7.
The cloud floats over the cluster that grows from the grid. And actually, visually, think: cloud in the air, cluster in the tree, and grid on the ground. Most of us creative types (who make all these technologies) are visual thinkers who can back it up with mathematical data and code. So always see if you can answer the riddle and correlate technological facsimiles to our physical realm here on Earth.
Intro
Grid, Cluster, and Cloud are three different words that each mark a specific time in history. Their definitions have intersecting traits, and nowadays they are largely interchangeable. You just need to know when to apply the correct or associated word. For example, I was talking to some older M.D.s (medical doctors) and they wanted to know what the cloud was. So I told them that the cloud was a computer cluster that you rent over the internet. And bingo, they got the idea within 10 seconds.
I will use a little bit of history in chronological prose.
Grid
The term grid was first used to represent one resource repeated across the terrestrial landscape or space. The term was frequently used during the rollout of the telegraph, where repeaters had to be placed on poles every N radii (plural of radius) to amplify the signal. Another example is the electrical grid that Thomas Edison and Nikola Tesla competitively started spreading around the Earth. Computers got really popular and were soon spread across The Grid to replace human telegraph (and telephone) operators.
The Grid is now a bunch of computers that can connect and terminate communication channels. The Grid is an infrastructure of computers that function for one goal, which is to run assembly (or binary) code.
Cluster
Foreseeing the power of computers, and having actually witnessed computers win wars (Turing's machine), DARPA (or ARPA, part of the U.S. military) stepped in.
DARPA started commissioning universities and colleges to utilize the Grid for multiplexing communication methods (using baud rates and protocols). Universities and colleges started making protocols to separate the different tasks that they wanted to carry out over the Grid and to target specific computers. That started the modern internet. In-house testing clusters were established in laboratories to simulate the grid. Clusters are great for orchestration. A job can be sub-divided over all or some of the slaves within a cluster. The military utilized the colleges' and universities' findings and applied the SOFTWARE to the Grid. There were some gotchas with clusters:
Must be same (or near same) hardware
Must have same operating system
The rules were strict because all the instruction sets had to be the same across the CPUs. Clusters usually had a master-and-slave type relationship. A cluster usually ran one Unix job at a time. Clusters had job schedulers. Then clusters got more complex because hardware manufacturers started making parallel chip architectures (on top of the von Neumann architecture).
Clusters became more powerful. Clusters inherited more complexity and people were doing more creative things. A cluster could now run different jobs, tasks, processes, asynchronous processes, synchronized processes, and many more interesting things. One box (or compute node) could run more jobs. Now the Grid could be used for multiple purposes. The rate of software updates on clusters was faster than on the actual grid. Clusters were deployed locally on campuses. Clusters started superseding the grid because you could directly produce a public-facing stack that out-performed the (national) grid.
My Experience
I went to college during the late 1990s and 2000s, and cluster was the word for a physical laboratory of multiple computers working as one virtual computer. Clusters were used for testing. Once your software worked on the cluster, then you could mv (move) it to the production-grade Grid. Then I witnessed network worms and computer viruses controlling zombie computers. These swarms of zombies could be used as one gigantic virtual cluster to run commands. Then programmers started DIY (do-it-yourself) protocols and software like BitTorrent and Napster.
So, leaping forward to the present, testing-cluster software is starting to be replaced by Solaris zones, FreeBSD jails, Linux containers, QEMU, hypervisors, VMware, VirtualBox, Vagrant, and Docker.
Cloud
Cloud is a marketing term used as an umbrella for the hardware of different grids and the software of those clusters. Cloud is one big ubiquitous word used to advertise, promote, and profess all that cluster technology for monetary gain. Cloud is also an effort to wrap all those technologies under one singular word. The Cloud allows multi-tenanted processes to share a gigantic grid. The Cloud maximizes efficiency by sub-dividing the electricity, CPU, RAM, disk, and broadband, which get shared and paid for by consumers. A side effect is that those consumer subscriptions and/or pay rates started producing profit. The Cloud also allows multiple users to install multiple operating systems that run multiple processes, all in software. So now we have acronyms like IaaS, PaaS, and SaaS. The Cloud can replace the start-up cost that was once so darn difficult to fund and bootstrap. The Cloud is a great solution for mock testing your software and building a consumer base for your business.
From another perspective, the Cloud triggers the brain of non-programmers to think a certain way. For example, the human resources department can comprehend and isolate what is presented in front of them.
So if you got the money, then you can purchase your share of the cloud experience and have easy support along with it. But if you have the skill-set, the time, the quick know-how, and the ability to install your own servers at co-locations, then do that because it is cheaper over the long run.
That is my narrative on the Grid vs Cluster vs Cloud.
I think this link compares Cluster and Grid well.
As far as I know, there are some exceptions in the case of clusters. YARN (Yahoo!) tries to handle multi-tenancy and distributed scheduling. Corona (Facebook) also has distributed scheduling.
Related
Can I simulate susceptible-infected-susceptible (SIS) model using NS-3?
I'm aiming to model malware flow using SIS and trying to simulate it using ns-3.
I'm a newbie to networks, and have been searching for this for hours, going through tens of research papers, but I can't find anything similar.
SIS implies some sort of discovery mechanism to spread the infection across a network. These discovery mechanisms typically use exploits in network-capable daemons. So, to simulate SIS, you want a simulator that works at the application level of the network stack.
ns-3 has some application-level capabilities, but it's mostly intended to be used to simulate the network stack below the application level. The application-level capabilities that ns-3 does have are limited to traffic generation. Discovering the presence of a service on a Node, let alone a compromised version of a daemon, is not supported.
So, it seems like you'll need to find another simulator. I'm not sure what your options are, but depending on how complex of a simulation you want, you could just roll your own by representing the network as a graph, and infecting a susceptible node with probability p.
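If you do roll your own, a toy version of that graph approach might look like the Python sketch below (the topology, infection probability beta, and recovery probability gamma are made-up placeholders; a real study would average over many runs and use a realistic graph):

```python
import random

# Toy SIS (susceptible-infected-susceptible) sketch on an arbitrary graph.
# beta  = probability an infected node infects each susceptible neighbour per step
# gamma = probability an infected node recovers (back to susceptible) per step
# The topology and all numbers below are illustrative placeholders.

graph = {            # adjacency list: node -> neighbours
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3],
}
infected = {0}       # initially infected nodes
beta, gamma = 0.3, 0.1

for step in range(50):
    new_infected = set(infected)
    for node in infected:
        # try to infect susceptible neighbours
        for neighbour in graph[node]:
            if neighbour not in infected and random.random() < beta:
                new_infected.add(neighbour)
        # infected node may recover and become susceptible again
        if random.random() < gamma:
            new_infected.discard(node)
    infected = new_infected
    print(f"step {step}: {len(infected)} infected")
```

This obviously ignores packet-level behaviour entirely, which is exactly the trade-off: you lose ns-3's network-stack fidelity but gain the ability to model the epidemic dynamics directly.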
"Commercial software routers from companies such as Vyatta can typically only attain transfer data at speeds of up to three gigabits per second. That isn’t fast enough to take advantage of the full speed of a typical network card, which operates at 10 gigabits per second." [1]
How is the speed of the network interface card relevant in this scenario? Aren't software routers connecting multiple Virtual Machines running on the same physical host? [2] Unless a PC has multiple network interface cards, it is unlikely that it functions as a packet switch between different physical hosts.
My interpretation suggests that there seem to be two different kinds of software routing: (1) embedding a real-time operating system on an actual router, and (2) writing application-layer code on a PC that can handle packets being transmitted between different virtual machines running on that very PC. Is this correct?
It depends on what your router is doing. If it's literally just looking at a static route table and forwarding packets out another interface, there isn't much of a performance hit.
It's when you get into things like NAT, crypto, QoS, SPI, etc. that you will see performance degradation. Hardware vendors usually use custom silicon to process the more advanced features; this allows for higher-throughput packet forwarding.
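To make the cheap case concrete, here is a toy longest-prefix-match lookup over a static route table in Python (the routes and interface names are invented for illustration). This basic forwarding decision is inexpensive even on a general-purpose CPU; it is the NAT, crypto, and inspection work layered on top of it that costs:

```python
import ipaddress

# Toy static route table: (prefix, outgoing interface). Entries are made up.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"),  "eth1"),
    (ipaddress.ip_network("10.1.2.0/24"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "eth0"),   # default route
]

def lookup(dst: str) -> str:
    """Return the interface for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, ifname) for net, ifname in routes if addr in net]
    return max(matches)[1]          # longest prefix wins

print(lookup("10.1.2.7"))   # -> eth2
print(lookup("10.9.9.9"))   # -> eth1
print(lookup("8.8.8.8"))    # -> eth0 (default)
```

Real routers do this lookup in optimized trie structures or in silicon, which is why the lookup itself is rarely the bottleneck.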
Now that merchant silicon is fast enough and the open source applications are getting better, the performance gap is closing.
It really depends on your use case as far as what you want to use. I've gone with both and not seen performance hits, but the software versions weren't handling high throughput workloads.
Performance of the link from the virtual network to the physical eventually becomes important at any reasonable scale. You're right that, within the same physical host, things can be pretty quick, but that requires that one can get everything needed in one box.
While merchant silicon has come a long way in improving the performance of networking equipment, greater gains are taking place in getting CPUs to handle networking tasks better. Both AMD and Intel have improved their architectures to the point where 10 Gbps forwarding is a reality. Intel has developed a specialized library (DPDK Wiki Page) that takes care of a lot of low-level networking functions at high performance.
Is sharing secondary storage one difference between clusters and grids?
I.e.:
In a cluster, do all the computers share the disks, so the CPUs see a single distributed file system?
In a grid, do the computers not share the disks, so the CPUs don't see each other's disks and file systems?
Yes, the concept of storage is the main point of difference between cluster systems and grid systems. (The explanation below is copied from "Comparison of Grid Computing vs. Cluster Computing".)
Grid systems are loosely coupled (decentralised), whereas cluster systems are tightly coupled.
Grid systems have "distributed job management and scheduling policies", whereas cluster systems have a "centralized job management & scheduling system".
The big difference is that a cluster is homogeneous while a grid is heterogeneous.
The computers that are part of a grid can run different operating systems and have different hardware, whereas the cluster computers all have the same hardware and OS. A grid can make use of spare computing power on a desktop computer, while the machines in a cluster are dedicated to working as a single unit and nothing else. Grids are inherently distributed by nature over a LAN, metropolitan area network, or WAN. On the other hand, the computers in a cluster are normally contained in a single location or complex.
Another difference lies in the way resources are handled.
In the case of a cluster, the whole system (all nodes) behaves like a single system and resources are managed by a centralized resource manager. In the case of a grid, every node is autonomous, i.e. it has its own resource manager and behaves like an independent entity.
Question: Is sharing secondary storage one difference between clusters and grids?
As the Wikipedia page on Computer cluster states:
"In most circumstances, all of the nodes use the same hardware[2] and the same operating system, although in some setups (i.e. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, and/or different hardware."
This may not be the typical stackoverflow question.
A colleague of mine has been speculating that flow-based routing is going to be the next big thing in networking. OpenFlow provides the technology to use low-cost switches in large applications, IT data centers, etc., replacing Cisco, HP, etc. switches and routers. The theory is that you can create a hierarchy of these OpenFlow switches with simple configuration, e.g. no spanning tree. OpenFlow will route each flow to the appropriate switch/switch port, using only the knowledge of the hierarchy of switches (no routers). The solution is supposed to save enterprises money and simplify networking.
Q. He is speculating that this may dramatically change enterprise networking. For many reasons, I am skeptical. I would like to hear your thoughts.
OpenFlow is a research project from Stanford University led by professor Nick McKeown. In the original OpenFlow research paper, the goal of OpenFlow was to give researchers a way "to run experimental protocols in the networks they use every day." For years networking researchers have had an almost impossible task deploying and evaluating their ideas on real networks with real Ethernet switches and IP routers. The difficulty is that real switches and routers from companies like Cisco, HP, and others are all closed, proprietary boxes that implement standard "protocols", like Ethernet spanning tree and OSPF. There are business reasons why Cisco and HP won't let you run software on their switches and routers; there is no technical reason. OpenFlow was invented to solve a people problem: if Cisco is not willing to let you run code on their switch, maybe they can at least provide a very narrow interface to let you remotely configure their switch, and that narrow interface is called OpenFlow.
To my knowledge more than a dozen companies are currently implementing OpenFlow support for their switches. Some like HP are only providing the OpenFlow software for research purposes. Others like NEC are actually offering commercial support.
For academic researchers that want to evaluate new routing protocols in real networks, OpenFlow is a huge win. For switch vendors, it is less clear if OpenFlow support will help, hurt, or have no effect in the long run. After all, the academic research market is very small.
The reason why OpenFlow is most often discussed in the context of enterprise networks is that OpenFlow grew out of a previous research project called Ethane that used OpenFlow's mechanism of remotely programming switches in an enterprise network in order to centralize a security policy. Ethane, and by extension OpenFlow, has led directly to two startup companies: Nicira, founded by Martin Casado, and Big Switch Networks, founded by Guido Appenzeller. It would be easier to implement an Ethane-like system if all of the switches in the network supported OpenFlow.
Closely related to enterprise networks are data center networks, the networks that interconnect thousands to tens of thousands of servers in companies such as Google, Facebook, Microsoft, Amazon.com, and Yahoo!. One problem with Ethernet is that it does not scale to this many servers on the same Layer 2 network. We attempted to solve this problem in a research project called PortLand. We used OpenFlow to facilitate programming the switches from a central controller, which we called a Fabric Manager. We released the PortLand source code as open source.
However, we also found a limitation to OpenFlow's functionality. In another data center networking research project called Helios, we were not able to use OpenFlow because it did not provide a mechanism for bonding multiple switch ports into a Link Aggregation Group (LAG). Presumably one could extend the OpenFlow specification indefinitely until all possible switch features become exposed.
There are other networks as well such as the Internet access networks, Internet backbones, home networks, wireless networks, cellular networks, etc. Researchers are trying to see where OpenFlow fits into all of these markets. What it really comes down to is the question, "what problem does OpenFlow solve?" Ethane makes a case for enterprise networks but I have not yet seen a compelling case for any other type of network. OpenFlow might be the next big thing, or it might end up being a case of "don't solve a people problem with a technical solution."
In order to assess the future of flow-based networking and OpenFlow, here’s the way to think about it.
It starts with the silicon trends: Moore’s Law (2X transistors per 18-24 months), and a correlated but slower improvement in the I/O bandwidth available on a single chip (roughly 2X every 30-36 months). You can now buy full-featured 10GbE single chip switches with 64 ports, and chips which have a mix of 40GbE and 10GbE ports with comparable total I/O bandwidth.
There are a variety of ways to physically connect these in a mesh (ignoring the loop-free constraints of spanning tree and the way Ethernet learns MAC addresses). In the high performance computing (HPC) world, a lot of work has been done building clusters with InfiniBand and other protocols using meshes of small switches to network the compute servers. This is now being applied to Ethernet meshes. The geometry of a CLOS or fat-tree topology enables a two-stage mesh with a large number of ports. The math is thus: where n is the number of ports per chip, the number of devices you can connect in a two-stage mesh is n²/2, and the number you can connect in a three-stage mesh is n³/4. While with standard spanning tree and learning, the spanning tree protocol will disable the multi-path links to the second stage, most of the Ethernet switch vendors have some sort of multi-chassis Link Aggregation protocol which gets around the multi-pathing limitation. There is also standards work in this area. Although it might not be obvious, the vast majority of Link Aggregation schemes allocate traffic so all the frames of any given flow take the same path. This is done in order to minimize out-of-order frames so they don't get dropped by some higher level protocol. They could have chosen to call this "flow-based multiplexing" but instead they call it "link aggregation".
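As a quick sanity check of that arithmetic using the 64-port chips mentioned above, here is a back-of-the-envelope sketch (illustrative only, not a design tool):

```python
# Back-of-the-envelope fat-tree / CLOS port math for n-port switch chips.
def two_stage_hosts(n: int) -> int:
    # leaf-and-spine (two-stage) mesh: n^2 / 2 attachable devices
    return n ** 2 // 2

def three_stage_hosts(n: int) -> int:
    # three-stage fat-tree: n^3 / 4 attachable devices
    return n ** 3 // 4

n = 64  # e.g. the 64-port 10GbE single-chip switches mentioned above
print(two_stage_hosts(n))    # 2048
print(three_stage_hosts(n))  # 65536
```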
Although the devil is in the details, there are a variety of data center operators and vendors that have concluded they don’t need to have large multi-slot chassis switches in the aggregation/core layer for server connect, instead using meshes of inexpensive 1U or 2U switches.
People have also concluded that eventually you need some kind of management station to set up the configuration of all the switches. Again, drawing from the experience with HPC and InfiniBand, they use what is called an InfiniBand Controller. In the telecom world, most telecom networks have evolved to separate the management and part of the control plane from the boxes that carry the data traffic.
Summarizing the points above, meshes of Ethernet switches with an external management plane with multipath traffic where flows are kept in order is evolutionary, not revolutionary, and is likely to become mainstream. At least one major company, Juniper, has made a big public statement about their endorsement of this approach. I'd call all of these "flow-based routing".
Juniper and other vendors’ proprietary approaches notwithstanding, this is an area that cries out for standards. The Open Networking Foundation (ONF), was founded to promote standards in this area, starting with OpenFlow. Within a couple of months, the sixty+ members of ONF will be celebrating their first year anniversary. Each member has, I am led to believe, paid tens of thousands of dollars to join. While the OpenFlow protocol has a ways to go before it is widely adopted, it has real momentum.
@Nathan: OpenFlow 1.1 actually adds some primitives that enable the use of multiple links via the Multipath Proposal.
An excellent view of OpenFlow by Simon Crosby
http://community.citrix.com/display/ocb/2011/03/21/The+Rise+of+the+Software+Defined+Network
More context on SDN, which discusses the IETF's SDN initiative and ONF's OpenFlow. Working in conjunction, they are a powerful combination: http://bit.ly/A8xYso
Nathan, Excellent historical account and overview of openflow. Thanks!
You've hit on the points that I've been wrapping my head around as to why OpenFlow might not be widely adopted. Since it was designed to be open, to allow researchers the ability to run experimental protocols, and not necessarily to be "compatible with" the big players (Cisco/HP/etc.), it puts itself into a niche (although potentially big) market; more on this later. And as you've stated, it's received some adoption in the "cloud data centers" (CDCs), e.g. Google, Facebook, etc., because they need to exploit experimental protocols to gain a competitive advantage or optimize for their application.
As you've stated, some switch vendors have added OpenFlow capability to capitalize on the niche need in academia and potentially sell into the CDCs (Google, Facebook). This is potentially a big market (or bubble, if you're pessimistic).
The problem that I see is that the majority of the market (80% or more) is enterprise IT data centers. The requirement here is for stable, compatible networking. Open and less expensive would be nice, but not at the cost of the former.
One could imagine a day when corporate IT is partially or completely cloud-sourced and QoS is maintained by the cloud provider. In that case, experimental protocols could be leveraged to provide a competitive advantage in speed or QoS, and OpenFlow could play a more dominant role. I personally think this scenario is many years off.
So, the conclusion I come to is that other than in research and perhaps CDCs (Google, Facebook), the market is pretty small. I suppose that if researchers use OpenFlow to come up with a better protocol for, say, link aggregation or congestion management, then eventually Cisco and HP will provide those in their standard offering because their customers will demand it. So OpenFlow could be a market influencer (via the research community), but it would not be a market disruptor.
Do you agree with my conclusions? Thanks for your input.
Over the years I've worked on a number of microcontroller-based projects; mostly with Microchip's PICs. I've used various microcontroller simulators, and while they can be very helpful at times, I often find myself frustrated. In real life microcontrollers never exist alone and the firmware's behavior is dependent on the environment. However, none of the sims I've used provide decent support for anything outside the microcontroller.
My first thought was to model the entire board in Verilog. But, I'd rather not create an entire CPU model, and I haven't had much luck finding existing models for the chips I use. Regardless, I really don't need, or want, to simulate the proc at that level of detail, and I'd like to retain the debugging facilities provided by a regular processor sim.
It seems to me that the ideal solution would be a hybrid simulator that interfaces a traditional processor simulator with a Verilog model.
Does such a thing exist?
I've used the Altera Nios II processor embedded on a FPGA. Altera provides a toolchain for simulating the CPU (with its software) together with your custom logic in a simulator. I suppose that a similar setup can be achieved by downloading a VHDL/Verilog core of your CPU (Did you try opencores ? They have lots of stuff there).
But keep in mind that it is going to be mind-bogglingly slow, so don't expect to simulate whole complex processes this way. The best you can hope for is simulating fine software-hardware interaction points to debug problems. If you need a deeper simulation, consider running it on a FPGA with built-in monitoring code.
For the "simulate the whole board" approach,
The Free Model Foundry has a large number of models, some in VHDL, others in Verilog, that are available now, but you'll need to pay to have new models created. These are very helpful for being sure the board is built correctly.
But I think the more common approach when debugging your PIC is to just build a board, then work on the firmware. In the chip world (where the firmware is running on a microprocessor in a chip that hasn't gone to fab yet), people often resort to very expensive systems (or renting time on them) that allow compiling part of the design into an emulator while the rest of the design runs in the normal simulator environment. Without the barrier of an expensive mask set for the chip, the cost is just not justifiable for a circuit board. I've heard of some creative applications of Simulink (MathWorks) with FPGAs, but my recollection is that one either ran the system on the computer or programmed the device and ran the same thing in real time.
I believe both Cadence (ask about Palladium) and Mentor Graphics have that integrated solution if you have the money to spend on it.
What I have done recently is create an interface between the simulation environment and the host system. Different HDL simulators have different interfaces, and getting the simulator NOT to think in batch mode (the traditional simulation model) but instead run forever like a real design is half of the problem.
Then, from the host, using C (or whatever), you can create abstractions that may or may not allow you to write your application software for whatever target (depending on what language and compiler capabilities you have). For example, you can make generic poke and peek functions; on the final target those actually poke and peek memory or I/O, but for simulation the abstraction talks to a testbench in the simulation that performs the same memory cycle.
I went one step further and used (Berkeley) sockets between the host and the testbench so that the simulation can keep running while the host applications stop and start. Not unlike having a real processor with an OS, where you start applications, run them to completion, and start another. At least that is true for test applications; for delivery you probably only have one app.
By creating these abstraction layers I can write real applications that will be used on the target when it is built. Along the way you can use software simulation of the logic initially, then, if you like, build an FPGA with an abstraction interface (throwaway logic), say a UART for example. Replace the shim between the application's abstraction layer and the simulator with a UART interface, or whatever. Then, when you marry the processor and logic in the same chip or on the same board, replace the abstraction layer again with direct calls to whatever interfaces they always thought they were talking to. If something breaks and you have retained the abstraction layer, you can take the application back to the simulation model and have access to all of your logic internals.
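As a rough illustration of that poke/peek abstraction, here is a sketch in Python (the answer above describes a C/C++ setup; the port number, wire format, and register addresses below are invented purely for illustration):

```python
import socket
import struct

# Sketch of a poke/peek abstraction layered over a socket to an HDL testbench.
# The wire format is an assumption: each request is a packed (op, address, data)
# triple; the testbench drives the simulated bus and returns read data.

SIM_HOST, SIM_PORT = "localhost", 5000   # where the testbench listens (assumed)

class SimBus:
    """Talks to the simulator's testbench over a socket."""
    def __init__(self):
        self.sock = socket.create_connection((SIM_HOST, SIM_PORT))

    def poke(self, addr: int, value: int) -> None:
        self.sock.sendall(struct.pack("<BII", 0x01, addr, value))   # write request

    def peek(self, addr: int) -> int:
        self.sock.sendall(struct.pack("<BII", 0x02, addr, 0))       # read request
        # (a real implementation would loop until all 4 bytes arrive)
        return struct.unpack("<I", self.sock.recv(4))[0]

class RealBus:
    """Same interface on the real target, e.g. via /dev/mem or a driver."""
    def poke(self, addr: int, value: int) -> None: ...
    def peek(self, addr: int) -> int: ...

# Application code is written against the abstraction only, so it can run
# unchanged against the simulation or the real hardware:
def blink_led(bus):
    bus.poke(0x4000_0000, 1)          # hypothetical GPIO register
    return bus.peek(0x4000_0004)      # hypothetical status register
```

The whole point is that `blink_led` neither knows nor cares whether a simulator or real silicon is on the other end of the bus object.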
Specifically, this time around I am using an HDL language, Cyclicity CDL, which is on SourceForge. The documentation needs some help, but the examples may get you going, and it produces synthesizable Verilog, so you get an extra win there. I threw out all the scripting/batch stuff other than the bare minimum needed to connect and start a C simulation model. So my testbench is in C (well, C++ technically); the sockets layer was done there. The output can be .vcd files, which gtkwave uses. Basically you can do the bulk of your HDL design using open-source software with no licenses, etc. By adding one or two lines of code to the CDL simulation part I was able to have it run as an infinite loop, which I can say works quite well; there doesn't appear to be any memory leak.
Both ModelSim and Cadence have standardized ways of connecting host C programs to the simulation world, and from there you can use IPC to get host applications talking to an abstraction-layer API.
This is probably way overkill for a PIC; I gave up on PICs a while ago for the faster and more C-friendly ARM-based micros anyway. There is (or was) an open-core PIC that you could simply incorporate into your simulation, even though that is not what you are trying to do here.
Not that I've seen. Your best bet is to properly define the interfaces and behavior between the uC and FPGA and then define a series of test waveforms that can be applied using an automated tester. You would have to make the automated tester (or perhaps a logic analyzer may have some such functionality) out of an FPGA or uC (apply waveform, watch interrupts, breakpoints, etc.). If you really want, I know that OpenCores.org has PIC- and AVR-like 8-bit uC cores defined in VHDL, so you could implement your entire project on the FPGA and then just debug that.
Generally there isn't a need to model the CPU at the RTL level, since you don't really care about what it does bit by bit; you generally care about what it does, e.g. register values, memories, and bus accesses.
The simplest is called a Bus Functional Model (BFM). This just generates the reads and writes that the CPU does, often based on a text file. These are available for some CPUs and many popular buses (e.g. PCI, PCIe). These simulate super fast.
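For a rough picture of what a bus functional model does, here is a toy transaction replayer in Python (the text-file format is invented for illustration; a real BFM lives inside the HDL simulator and drives actual bus signals cycle by cycle):

```python
# Toy bus-functional-model flavour: replay CPU reads/writes from a text file
# against a simple memory dictionary. File format (invented for illustration):
#   W 0x1000 0xDEADBEEF
#   R 0x1000
memory = {}

def run_transactions(path: str) -> None:
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            op, addr = parts[0], int(parts[1], 16)
            if op == "W":
                memory[addr] = int(parts[2], 16)      # bus write
            elif op == "R":
                value = memory.get(addr, 0)           # bus read
                print(f"read 0x{addr:X} -> 0x{value:X}")

# run_transactions("transactions.txt")
```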
The next step up is a functional cycle-accurate model. Those simulate fast. They are often encrypted.
Last is a full RTL model. Those usually are only available if you are working closely with the CPU vendor, e.g. using their core in your ASIC. Typically these are encrypted, unless you are a huge company.
Memory models are typically cycle-accurate (e.g. Micron).
My workmates from the hardware department use FPGA simulation software quite often to find timing bugs and track down strange behaviours.
Simulating one or two milliseconds can take several hours, so using the simulator for anything but very small things is not feasible.
You may want to have a look at SystemC though. http://en.wikipedia.org/wiki/SystemC