I'm doing some experimenting with UEFI and haven't been able to wrap my head around virtual addressing.
I have written a UEFI application that contains the string "CatsAreAwesome" and have it print the virtual address of that string. The address varies between executions, so I'll stick with one specific example here. The code prints that the string is at virtual address 0x120ac3c0. If I pause the VM and scan the vmem file, I find two instances of the string, at addresses 0x1209e410 and 0x12ab000.
From calling GetMemoryMap() in UEFI, I find that the memory sections those two fall in are:
TYPE PhysStart PhysEnd VirtStart VirtEnd
EfiConventionalMemory 1209C000 120A4000 0 8000
EfiLoaderCode 120A4000 120B1000 0 D000
I don't understand how the translation is working. The virtual start for those two sections is 0, which I would have thought meant identity mapped, but the virtual and physical addresses don't line up, so that's obviously not correct. Can someone explain to me how the translation works? How would I go from virtual to physical, or vice versa?
My application prints the string and gathers the memory map, so the map was gathered while the application was running. The application then waits for user input; I paused the VM during this time, so the physical addresses were also found while the application was running.
Looking at the EDK2 code, it seems that VirtualStart is always set to zero until the SetVirtualAddressMap runtime services function is called.
During Boot Services, UEFI and its applications always run identity mapped. UEFI itself never runs address-translated, but it permits the pieces that stay resident (the Runtime Services) to run at a virtual address assigned to them later by an external agent, typically the operating system.
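To illustrate, here is my own untested sketch (EDK2-style names): during Boot Services the pointer value you print is already the physical address, so you can look it up directly in the memory map. Map, MapSize and DescSize are assumed to come from an earlier gBS->GetMemoryMap() call.

    EFI_PHYSICAL_ADDRESS  Addr = (EFI_PHYSICAL_ADDRESS)(UINTN)"CatsAreAwesome";
    EFI_MEMORY_DESCRIPTOR *Desc = Map;

    // Walk the descriptors and find the one whose physical range contains Addr.
    while ((UINT8 *)Desc < (UINT8 *)Map + MapSize) {
      EFI_PHYSICAL_ADDRESS End = Desc->PhysicalStart + EFI_PAGES_TO_SIZE(Desc->NumberOfPages);
      if (Addr >= Desc->PhysicalStart && Addr < End) {
        Print(L"0x%lx lies in a type-%d region starting at 0x%lx\n",
              Addr, Desc->Type, Desc->PhysicalStart);
        break;
      }
      Desc = (EFI_MEMORY_DESCRIPTOR *)((UINT8 *)Desc + DescSize);
    }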
After a successful call to ExitBootServices(), you can call SetVirtualAddressMap() to re-apply relocations and make it possible for the code to run at a given virtual address. The expected use case for this is providing Runtime Services while in an operating-system context.
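A hedged sketch of that flow from an OS loader's point of view (EDK2-style names; gBS, gRT and ImageHandle are the usual globals/parameters, error handling and the usual retry when the map changes are omitted, and the virtual offset is purely a placeholder):

    EFI_STATUS            Status;
    UINTN                 MapSize = 0, MapKey, DescSize;
    UINT32                DescVersion;
    EFI_MEMORY_DESCRIPTOR *MemMap = NULL;
    EFI_MEMORY_DESCRIPTOR *D;

    // First call fails with EFI_BUFFER_TOO_SMALL and returns the required size.
    gBS->GetMemoryMap(&MapSize, MemMap, &MapKey, &DescSize, &DescVersion);
    MapSize += 2 * DescSize;  // slack for the allocation below
    gBS->AllocatePool(EfiLoaderData, MapSize, (VOID **)&MemMap);
    gBS->GetMemoryMap(&MapSize, MemMap, &MapKey, &DescSize, &DescVersion);

    // Boot Services (and the identity-mapped environment) end here;
    // only the Runtime Services remain callable.
    Status = gBS->ExitBootServices(ImageHandle, MapKey);

    // Assign a virtual address to every runtime region. The offset is the
    // OS/loader's choice; the one below is only illustrative.
    for (D = MemMap; (UINT8 *)D < (UINT8 *)MemMap + MapSize;
         D = (EFI_MEMORY_DESCRIPTOR *)((UINT8 *)D + DescSize)) {
      if (D->Attribute & EFI_MEMORY_RUNTIME) {
        D->VirtualStart = D->PhysicalStart + 0xFFFFFFFF80000000ULL;
      }
    }

    // Relocations are applied once; afterwards Runtime Services must be
    // called through the new virtual addresses.
    Status = gRT->SetVirtualAddressMap(MapSize, DescSize, DescVersion, MemMap);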
I am calling a shell script that does some processing from JCL using BPXBATCH like this:
//STEP2 EXEC PGM=BPXBATCH,
// PARM='SH PATHTOSCRIPT.SH MYARGUMENT'
The JCL has the service class with the highest priority. However, the shell script enters a queue waiting for resources; sometimes it runs quickly, and other times it waits a long time for resources. The priority of the JCL seems to be independent of that of the shell script. I read that using the "nice" command in UNIX might increase the priority of the shell script.
First, I want to be sure that the priority of a JCL job on z/OS does not affect the priority of the UNIX process that was called from that JCL through BPXBATCH. I cannot find any documentation about it.
Short Answer
To answer your question first: BPXBATCH runs in one address space, and the shell runs in a second address space. Commands issued by the shell may run in the same address space as the shell, or in one or more additional address spaces.
The BPXBATCH address space has a service class, and the shell address space(s) have a service class, probably a different one. Each service class has its own performance goal, which tells the system how to manage that work.
Detailed Answer
The z/OS Workload Manager (WLM) is responsible for assigning work to a service class when new work is presented to it. Service classes specify performance goals and importance levels, not priorities. WLM manages all work in the system according to its performance goal, based on the importance of that goal.
There are a couple of (workload management) subsystems that may start new work. Examples of such subsystems are:
JES, which manages batch work, i.e. batch jobs.
TSO, which manages interactive TSO user work (TSO login).
OMVS, which manages forked and non-locally spawned z/OS UNIX work.
STC, which manages started job workload.
This list is not complete; I listed only the subsystems that I need to answer the question.
When JES2/3 receives a job that shall run on the system, it presents some job attributes to WLM, and WLM assigns the job to a service class. It does so using WLM classification rules for subsystem type JES, and the attributes given.
Everything that runs in this job, i.e. in the job's address space, will be managed towards the performance goal of the assigned service class. This includes z/OS UNIX work that runs in this very address space, i.e. work that is not started via UNIX fork() or non-local spawn().
When a z/OS UNIX process starts a new process via fork() or via non-local spawn(), this new work is handled by the WLM subsystem OMVS. The OMVS subsystem presents some attributes of the new process to WLM, and WLM assigns the process to a service class. It does so using the WLM classification rules for subsystem type OMVS and the attributes given. This kind of work always runs in a separate, new address space.
BPXBATCH starts the (first) UNIX command it is given via PARM= or //STDPARM as a new process, using either fork() or spawn(). The spawn() may be a local spawn() or a non-local spawn(); which one is used depends on many factors, too complex to explain here.
The important point here is that when running BPXBATCH with PARM='SH ...', the shell process will always run in a separate, new address space and will be classified via WLM subsystem OMVS.
The result is that BPXBATCH runs in one address space with its service class, and the shell runs in a second address space with its service class. The service classes may be the same, but usually they are different WLM definitions with different performance goals.
As a starter, have a look at z/OS MVS Planning: Workload Management
nice() on z/OS UNIX
nice() has no effect on z/OS UNIX unless the system has been set up to support it. There is a parameter, PRIORITYGOAL(...), in the BPXPRMxx parmlib member to set up a list of up to 40 WLM service classes that will be used in conjunction with nice(). I have never heard of anyone having set this parameter.
See z/OS MVS Initialization & Tuning Reference for details about the BPXPRMxx member.
I have a scenario like this:
I have applications A, B, C, D..., and I have physical machines M, N, O, P, Q...
I use BYON to manage the physical machines. Because the physical machines are "strong", I want to deploy several applications on each of them, so I set the SLA to global. This raises a question: when application A is deployed on machine M and I then deploy applications B, C, D..., will applications A, B, C, D... all be installed on machine M only, rather than on machines N, O, P, Q...? (In that case the load on machine M would become very high.)
Does this problem exist, and if so, how can I resolve it? Thank you very much!
It's possible to limit the number of services on a specific machine by specifying the memory required for each service. As part of the global isolation SLA you can set the amount of memory required by each service, so when there isn't enough memory left on a machine, the next one will be used.
The syntax is:
isolationSLA {
    global {
        instanceCpuCores 0
        instanceMemoryMB 128 // each instance needs 128MB allocated for it on the VM.
        useManagement true   // enables installing services on the management server; defaults to false.
    }
}
Please note that the above configuration also allows services to be installed on the management machine itself (useManagement true), which you can set to false.
A more detailed explanation is available here, under "Isolation SLA".
I am using Ubuntu 12.10 32-bit on an x86 system. I have physical memory (about 32 MB, sometimes more) which is enumerated and reserved through the ACPI tables as a device so that Linux/the OS cannot use it. I have a Linux driver for this memory device. The driver implements mmap() so that when a process calls mmap(), the driver can map this reserved physical memory to user space. Sometimes I also do nothing in the mmap handler except set up the VMA and point vma->vm_ops to a vm_operations_struct with the open, close, and fault functions implemented. When the application accesses the mmapped memory, I get a page fault and my .fault function is called. There I use vm_insert_pfn to map the virtual address to any physical address in the 32 MB that I want.
Here is the problem I have: in the driver, if I call ioremap_cache() during init, I get good cache performance from the application when I access data in this memory. However, if I don't call ioremap_cache(), I see that any access to these physical pages results in a cache miss and gives horrible performance. I looked at the PTEs and see that the PCD bit for these virtual-to-physical translations is set, which means caching of these physical pages is disabled. We tried setting _PAGE_CACHE_WB in the vma->vm_page_prot field and also used remap_pfn_range() with the new vm_page_prot, but the PCD bit was still set in the PTEs.
Does anybody have any idea how we can ensure caching is enabled for this memory? The reason I don't want to use ioremap_cache() for the full 32 MB is that kernel virtual addresses are limited on 32-bit systems and I don't want to hold on to them.
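For reference, a minimal sketch of the .fault handler approach I described above (kernel 3.x-era APIs; untested as shown, rsvd_phys_base and the function names are placeholders, and with PAT enabled the resulting memory type still depends on the kernel's memtype tracking for that physical range):

    #include <linux/mm.h>

    /* Placeholder: physical base of the ACPI-reserved region, filled in at init. */
    static phys_addr_t rsvd_phys_base;

    /* .fault handler: install a PTE pointing into the reserved region.
     * The VMA must have been marked VM_PFNMAP in the driver's mmap(), and
     * the mapping is assumed to start at file offset 0. vm_page_prot is left
     * cacheable here (no pgprot_noncached()), since that is what would
     * explicitly request an uncached (PCD=1) mapping. */
    static int rsvd_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
        unsigned long pfn = (unsigned long)(rsvd_phys_base >> PAGE_SHIFT) + vmf->pgoff;

        if (vm_insert_pfn(vma, (unsigned long)vmf->virtual_address, pfn))
            return VM_FAULT_SIGBUS;

        return VM_FAULT_NOPAGE; /* PTE installed directly, no struct page involved */
    }

    static const struct vm_operations_struct rsvd_vm_ops = {
        .fault = rsvd_vm_fault,
    };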
Suggestions:
Read linux/Documentation/x86/pat.txt
Boot Linux with debugpat
After trying the set_memory_wb() APIs, check /sys/kernel/debug/x86/pat_memtype_list
I am aware that nodes can be started from the shell. What I am looking for is a way to start a remote node from within a module. I have searched, but have not been able to find anything.
Any help is appreciated.
There's a pool(3) facility:
pool can be used to run a set of Erlang nodes as a pool of computational processors. It is organized as a master and a set of slave nodes.
pool:start/1,2 starts a new pool. The file .hosts.erlang is read to find host names where the pool nodes can be started. The slave nodes are started with slave:start/2,3, passing along Name and, if provided, Args. Name is used as the first part of the node names; Args is used to specify command line arguments.
With pool you get load distribution facility for free.
The master node may be started this way:
erl -sname poolmaster -rsh ssh
The -rsh flag here specifies an alternative to rsh for starting a slave node on a remote host; we use SSH here. Make sure your box has working SSH keys and that you can authenticate to the remote hosts using these keys.
If there are no hosts in the file .hosts.erlang, then no slave nodes are started, and you can use slave:start/2,3 to start slave nodes manually, passing arguments if needed.
You could, for example, start a remote node like this:
Arg = "-mnesia_dir " ++ M,
slave:start(H, Name, Arg).
Ensure epmd(1) is up and running on the remote boxes in order to start Erlang nodes.
Hope that helps.
A bit more low-level than pool is the slave(3) module. pool builds upon the functionality in slave.
Use slave:start to start a new slave.
You should probably also specify -rsh ssh on the command-line.
So use pool if you need the kind of functionality it offers; if you need something different, you can build it yourself out of slave.
I want to develop a web application using ASP.NET running on IIS.
If a user submits a Maxima input command, the code-behind will ask a custom Windows service to create a new, distinct, temporary process executing an external assembly.
More precisely, there is only one Windows service serving all users, but each user will be associated with a distinct, temporary process running an external assembly.
The Windows service contains a single socket listening on a certain port and a list of asynchronous sockets for communication. Each socket in the list will communicate with a distinct, temporary process running an external assembly, which acts as a socket client.
Note: I use a process rather than an application domain because the external assembly is a batch file (not a managed assembly).
My questions are:
How to call windows service from code behind?
How to associate each user with a distinct, temporary process?
How to improve scalability if there are more and more users working simultaneously?
If the Maxima input command entered by a user causes a long-running process, what is the wise way to notify the user about its progress?
The following link provides more detail about my project: https://sourceforge.net/projects/aspmaxima/forums/forum/1190702/topic/3786806
Thank you in advance.
You should not be using codebehind in an MVC app.
Scalability while interoperating with unmanaged code is hard. The only sane way to do this is to decompose the problem.
When you launch an unmanaged app, it already has its own process.
Multiple task flows in a service called from a web app, with monitoring? You're describing Windows Server AppFabric. Host your service with AppFabric, and you won't have to write all of this yourself.
Regarding scalability, when you're dealing with unmanaged processes, you're going to have to limit the number which can start concurrently. Trial and error will be necessary to determine the optimum on specific hardware.
You can only monitor an unmanaged task's progress if that app specifically provides for it.
Launching arbitrary unmanaged code from a service is dangerous, because the launched app, by default, inherits the service's (typically raised) permissions. Consider using specific, limited credentials for the launched app instead of the default.