I am trying to map a region of FPGA memory into the host system:
resource0 = os.open("/sys/bus/pci/devices/0000:0b:00.0/resource0", os.O_RDWR | os.O_SYNC)
resource_size = os.fstat(resource0).st_size
mem = mmap.mmap(resource0, 65536, flags=mmap.MAP_SHARED, prot=mmap.PROT_WRITE | mmap.PROT_READ, offset=0)
If I flush my host page with mem.flush() and then print the contents, the data is the same as before; nothing gets cleared from the page:
print(mem[0:131072])
mem.flush()
print(mem[0:131072])
As I read the Python mmap docs (https://docs.python.org/3.6/library/mmap.html), I understood that flush clears the content, but when I test it the data remains the same.
I am using Python 3.6.9.
Why do you expect flush to clear a page?
https://docs.python.org/2/library/mmap.html
flush([offset, size])
Flushes changes made to the in-memory copy of a file back to disk. Without use of this call there is no guarantee that changes are written back before the object is destroyed. If offset and size are specified, only changes to the given range of bytes will be flushed to disk; otherwise, the whole extent of the mapping is flushed. offset must be a multiple of the PAGESIZE or ALLOCATIONGRANULARITY.
So if you want to clear anything, you have to assign new values first and then write them back to memory, i.e. flush.
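For example, here is a minimal sketch (reusing the resource path and 64 KiB size from the question; it assumes the mapped BAR region actually accepts host writes):
import mmap
import os

# Map 64 KiB of the PCI BAR (same path and size as in the question).
resource0 = os.open("/sys/bus/pci/devices/0000:0b:00.0/resource0", os.O_RDWR | os.O_SYNC)
mem = mmap.mmap(resource0, 65536, flags=mmap.MAP_SHARED, prot=mmap.PROT_WRITE | mmap.PROT_READ, offset=0)

# flush() never zeroes anything; write the zeros explicitly, then flush the change.
mem[0:65536] = b"\x00" * 65536
mem.flush()
print(mem[0:16])  # reads back zeros only if the device region is writable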
I am trying to figure out why I am getting this error when indexing a document from a python web app.
The document in this case is a base64 encoded string of a file of size 10877 KB.
I post it to my web app, which then posts it via elasticsearch.py to my elastic instance.
My elastic instance throws an error:
TransportError(429, 'circuit_breaking_exception', '[parent] Data
too large, data for [<http_request>] would be
[1031753160/983.9mb], which is larger than the limit of
[986932838/941.2mb], real usage: [1002052432/955.6mb], new bytes
reserved: [29700728/28.3mb], usages [request=0/0b,
fielddata=0/0b, in_flight_requests=29700728/28.3mb,
accounting=202042/197.3kb]')
I am trying to understand why my 10877 KB file ends up at a size of 983mb as reported by elastic.
I understand that increasing the JVM max heap size may allow me to send bigger files, but I am more wondering why it appears the request size is 10x the size of what I am expecting.
Let us see what we have here, step by step:
[parent] Data too large, data for [<http_request>]
gives the name of the circuit breaker
would be [1031753160/983.9mb],
shows what the heap usage would be if the request were executed
which is larger than the limit of [986932838/941.2mb],
tells us the current setting of the circuit breaker above
real usage: [1002052432/955.6mb],
this is the real usage of the heap
new bytes reserved: [29700728/28.3mb],
this is actually an estimation of the impact the request will have (the size of the data structures that need to be created in order to process the request). Your ~10 MB file will probably consume about 28.3 MB.
usages [
request=0/0b,
fielddata=0/0b,
in_flight_requests=29700728/28.3mb,
accounting=202042/197.3kb
]
This last part tells us how the estimation is calculated from the individual child breakers. So the 983.9mb is not the size of your request: it is the projected total heap usage (the 955.6mb already in use plus the ~28.3mb estimated for this request), and that total would exceed the parent breaker's 941.2mb limit.
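If you want to check how close the parent breaker is to its limit before sending such a request, a small sketch with elasticsearch-py could look like this (it assumes a node reachable at localhost:9200; adjust to your instance):
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # assumed local node

# Ask every node for its circuit-breaker statistics.
stats = es.nodes.stats(metric="breaker")
for node_id, node in stats["nodes"].items():
    parent = node["breakers"]["parent"]
    print(node_id,
          "estimated:", parent["estimated_size"],
          "limit:", parent["limit_size"],
          "tripped:", parent["tripped"])
If the estimated usage is already close to the limit, the usual options are giving the JVM more heap or sending smaller (e.g. chunked) requests instead of one large base64 document.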
Could someone please take a quick look at what is wrong with my code, which is supposed to store a cache in off-heap memory through GridGain?
My configuration is essentially the same as on the wiki page (http://doc.gridgain.org/latest/Off-Heap+Memory):
<!-- Enable OffHeap -->
<property name="offHeapMaxMemory" value="#{2L * 1024L * 1024L * 1024L}"/>
<!-- Always store cache entries in off-heap memory, evict to Swap. -->
<property name="memoryMode" value="OFFHEAP_TIERED"/>
However, jconsole shows that data is still being written to heap memory (heap usage fluctuates), and when I try to get the data that I've stored, I get a zero result.
The code is:
final GridCache<String, Object> cache = grid.cache("partitioned");
for (long i = 0; i < 1000; i++) {
    cache.putx(String.valueOf(i), hundredBytes.clone());
    if (i % 1024 * 1024 == 0) {
        System.out.println(i + "bytes inserted");
    }
}
System.out.println("Cache size: " + cache.size());
The last line shows me "Cache size: 0". That's strange; probably I do not quite understand how to access off-heap memory. Is there another/separate API for that?
Thanks in advance
Correct, there is a separate API:
GridCache<String, Object> cache = grid.cache("partitioned");
Iterator<Map.Entry<String, Object>> localIterator = cache.offHeapIterator();
However, pay attention that grid.cache("partitioned") returns the cache for the whole cluster when data is stored on heap, whereas cache.offHeapIterator() returns only the data stored off heap on the local node of the cluster.
This confused me a bit.
There are a few points about the on-heap/off-heap difference:
There is no difference between on-heap and off-heap data storage when you access your data by key: grid.cache("partitioned") still works with the cache of the whole cluster, not just local entries. You can access your data by key using cache.get(...) methods regardless of the node you execute your code on.
There are different methods which return the local size of on-heap and off-heap storage. cache.size() returns the number of on-heap entries stored in the cache, cache.offHeapEntriesCount() returns the number of entries stored in off-heap storage, and cache.swapKeys() returns the number of entries stored in swap.
Entry iterators for on-heap, off-heap and swap always return the local entries of the node. If you need to iterate over the whole cache, you can use a scan query or broadcast a closure which will iterate over local entries.
I am working on an OpenCL project and I have run into an issue where, if I try to send data from the CPU to global memory, it sometimes locks up the application. This happens sporadically: I can run it x times in a row and the next time it locks. It only appears to happen if I try to send data that is not 32 bits wide. For example, I can send float and int just fine, but when I try short, char, or half I get random lockups. It is not dying on badly initialized data or something, because it does run, just not all the time. I also added some logging and found that it always locks up just after attempting to write one of these non-standard-size data arrays. I am running on an NVIDIA GeForce GT 330M. Below is a snippet of the code I am running to send the data. I am using the C++ interface on the host side.
cl_half *m_aryTest;
shared_ptr< cl::Buffer > m_bufTest;
m_aryTest = new cl_half[m_iNeuronCount];
m_bufTest = shared_ptr<cl::Buffer>( new cl::Buffer(m_lpNervousSystem->ActiveContext(), CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR, sizeof(m_aryTest)*m_iNeuronCount, m_aryTest));
kernel.setArg(8, *(m_bufTest.get()));
printf("m_bufTest.\n");
m_lpQueue->enqueueWriteBuffer(*(m_bufTest.get()), CL_TRUE, 0, sizeof(m_aryTest)*m_iNeuronCount, m_aryTest, NULL, NULL);
Does anyone have any ideas on why this is happening?
Thanks
Here I have a simple feature on ASP.NET MVC3 which is hosted on Azure.
1st step: the user uploads a picture
2nd step: the user crops the uploaded picture
3rd step: the system saves the cropped picture and deletes the temp file, which is the uploaded original picture
Here is the problem I am facing now: where to store the temp file?
I tried somewhere on the Windows file system, or in LocalResources: the problem is that these resources are per instance, so there is no guarantee that the code on the instance showing the picture to crop will run on the same instance that saved the temp file.
Do you have any idea on this temp file issue?
Normally the file exists just for a short while before it is deleted.
The temp file needs to be instance-independent.
Ideally the file should have some expiry setting (for example, 1 hour) so it deletes itself in case the code crashes somewhere.
OK. So what you're after is basically something that is shared storage but expires. Amazon have just announced a rather nice setting called object expiration (https://forums.aws.amazon.com/ann.jspa?annID=1303). Nothing like this for Windows Azure storage yet, unfortunately, but that doesn't mean we can't come up with some other approach; indeed we may even come up with a better (more cost-effective) approach.
You say that it needs to be instance independent, which means using a local temp drive is out of the picture. As others have said, my initial leaning would be towards Blob storage, but you will have cleanup effort there. If you are working with large images (>1MB) or low throughput (<100rps) then I think Blob storage is the only option. If you are working with smaller images AND high throughput then the transaction costs for blob storage will start to really add up (I have a white paper coming out soon which shows some modelling of this, but some quick thoughts are below).
For a scenario with small images and high throughput, a better option might be to use the Windows Azure Cache as your temporary storage area. At first glance it will be eye-wateringly expensive on a per-GB basis (110GB/month for Cache, 12c/GB for Storage). But with storage your transactions are paid for, whereas with Cache they are 'free'. (Quotas are here: http://msdn.microsoft.com/en-us/library/hh697522.aspx#C_BKMK_FAQ8) This can really add up; e.g. using 100kb temp files held for 20 minutes with a system throughput of 1500rps, using Cache is about $1000 per month vs $15000 per month for storage transactions.
The Azure Cache approach is well worth considering, but to be sure it is the 'best' approach I'd really want to know:
Size of images
Throughput per hour
A bit more detail on the actual client interaction with the server during the crop process. Is it an interactive process where the user will pull the image into their browser and crop visually? Or is it just a simple crop?
Here is what I see as a possible approach:
1. The user uploads the picture.
2. Your code saves it to a blob and keeps a record in a data backend of the relation between the user session and the uploaded image (marking it as a temp image).
3. Display the image in the cropping user interface.
4. When the user is done cropping on the client:
4.1. retrieve the original from the blob
4.2. crop it according to the data sent from the user
4.3. delete the original from the blob and the record in the data backend used in step 2
4.4. save the final image to another blob (the final blob)
And have one background process checking for "expired" temp images in the data backend (used in step 2), deleting both the images and their records in the data backend.
Please note that even in a WebRole you still have the RoleEntryPoint descendant, and you can still override the Run method. Implementing an infinite loop in Run() (that method shall never exit!), you can check every N seconds (depending on your Thread.Sleep() in Run()) whether there is anything to delete.
You can use Azure blob storage. Have a look at this tutorial.
The sample below will help you.
https://code.msdn.microsoft.com/How-to-store-temp-files-in-d33bbb10
You have two ways to handle temp files in Azure:
1. You can use the Path.GetTempPath() and Path.GetTempFileName() functions for the temp file name.
2. You can use Azure blob storage to simulate it.
private long TotalLimitSizeOfTempFiles = 100 * 1024 * 1024;

private async Task SaveTempFile(string fileName, long contentLength, Stream inputStream)
{
    try
    {
        // First, check whether the container exists; if not, create it.
        await container.CreateIfNotExistsAsync();
        // Initialize a blob reference.
        CloudBlockBlob tempFileBlob = container.GetBlockBlobReference(fileName);
        // If a blob with this name already exists, delete the old one.
        tempFileBlob.DeleteIfExists();
        // Check whether the stored blobs would exceed the limit; if so, clear some of them.
        await CleanStorageIfReachLimit(contentLength);
        // Then upload the new file.
        tempFileBlob.UploadFromStream(inputStream);
    }
    catch (Exception ex)
    {
        if (ex.InnerException != null)
        {
            throw ex.InnerException;
        }
        else
        {
            throw;
        }
    }
}

// Check whether the total size of the blobs exceeds the limit; if so, delete the oldest ones.
private async Task CleanStorageIfReachLimit(long newFileLength)
{
    List<CloudBlob> blobs = container.ListBlobs()
        .OfType<CloudBlob>()
        .OrderBy(m => m.Properties.LastModified)
        .ToList();
    // Get the total size of all blobs.
    long totalSize = blobs.Sum(m => m.Properties.Length);
    // Calculate how much space may be used before the upload.
    long realLimitSize = TotalLimitSizeOfTempFiles - newFileLength;
    // Delete the oldest blobs until enough space is free, then stop.
    foreach (CloudBlob item in blobs)
    {
        if (totalSize <= realLimitSize)
        {
            break;
        }
        await item.DeleteIfExistsAsync();
        totalSize -= item.Properties.Length;
    }
}
I am running a simple query to get data out of my database and display it. I'm getting an error that says Response Buffer Limit Exceeded.
Error is : Response object error 'ASP 0251 : 80004005'
Response Buffer Limit Exceeded
/abc/test_maintenanceDetail.asp, line 0
Execution of the ASP page caused the Response Buffer to exceed its configured limit.
I have also tried Response.Flush in my loop and Response.Buffer = False at the top of the page, but I am still not getting any data.
My database contains 5600 records for this query. Please give me some steps or code to solve the issue.
I know this is way late, but for anyone else who encounters this problem: If you are using a loop of some kind (in my case, a Do-While) to display the data, make sure that you are moving to the next record (in my case, a rs.MoveNext).
Here is what a Microsoft support page says about this:
https://support.microsoft.com/en-us/help/944886/error-message-when-you-use-the-response-binarywrite-method-in-iis-6-an.
But it’s easier in the GUI:
In Internet Information Services (IIS) Manager, click on ASP.
Change Behavior > Limits Properties > Response Buffering Limit from 4 MB to 64 MB.
Apply and restart.
The reason this is happening is because buffering is turned on by default, and IIS 6 cannot handle the large response.
In Classic ASP, at the top of your page, after <%@ Language="VBScript" %> add:
<%Response.Buffer = False%>
In ASP.NET, you would add Buffer="False" to your Page directive.
For example:
<%@ Page Language="C#" Buffer="False" %>
I faced the same kind of issue; my IIS version is 8.5. Increasing the Response Buffering Limit under ASP -> Limits Properties solved the issue.
In IIS 8.5, select your project and you can see the options on the right-hand side. There, under IIS, you can see the ASP option.
In the options window, increase the Response Buffering Limit to 40194304 (approximately 40 MB).
Navigate away from the option; at the top of the right-hand side you can see the Actions menu. Select Apply. It solved my problem.
If you are not allowed to change the buffer limit at the server level, you will need to use the <%Response.Buffer = False%> method.
HOWEVER, if you are still getting this error and have a large table on the page, the culprit may be the table itself. By design, some versions of Internet Explorer will buffer the entire content between the table tags before it is rendered to the page. So even if you are telling the page not to buffer the content, the table element may be buffered and cause this error.
Some alternate solutions may be to paginate the table results, but if you must display the entire table and it has thousands of rows, throw this line of code in the middle of the table generation loop: <% Response.Flush %>. For speed considerations, you may also want to consider adding a basic counter so that the flush only happens every 25 or 100 lines or so.
Drawbacks of not buffering the output:
slowdown of overall page load
tables and columns will adjust their widths as content is populated (table appears to wiggle)
Users will be able to click on links and interact with the page before it is fully loaded. So if you have some javascript at the bottom of the page, you may want to move it to the top to ensure it is loaded before some of your faster moving users click on things.
See this KB article for more information http://support.microsoft.com/kb/925764
Hope that helps.
Thank you so much!
<%Response.Buffer = False%> worked like a charm!
My ASP/HTML table was returning a blank page at about 2700 records. The following debugging lines helped expose the buffering problem: I replaced the Do While loop as follows and played with the limit numbers to see what was happening:
Replace
Do While Not rs.EOF
    'etc .... your block of code that writes the table rows
    rs.MoveNext
Loop
with
Do While recCount < 2500
    If rs.EOF Then
        recCount = 2501
        Exit Do
    End If
    'etc .... your block of code that writes the table rows
    recCount = recCount + 1
    rs.MoveNext
Loop
Response.Write "recCount = " & recCount
Raise or lower the 2500 and 2501 to see if it is a buffering problem. For my record set, I could see that the blank page (blank table) was being returned at about 2700 records. Good luck to all, and thank you again for solving this problem! Such a simple, great solution!
You can increase the limit as follows:
Stop IIS.
Locate the file %WinDir%\System32\Inetsrv\Metabase.xml
Modify the AspBufferingLimit value. The default value is 4194304, which is 4 MB.
For example, change it to 20 MB (20971520).
Restart IIS.
One other answer to the same error message (this just fixed my problem) is that the system drive was low on disk space, meaning about 700 KB free. Deleting a lot of unused stuff on this really old server and then restarting IIS and the website (probably only IIS was necessary) caused the problem to disappear.
I'm sure the other answers are more useful for most people, but for a quick fix, just make sure that the system drive has some free space.
I rectified the error 'ASP 0251 : 80004005' Response Buffer Limit Exceeded as follows:
To increase the buffering limit in IIS 6, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs SET w3svc/aspbufferinglimit LimitSize
Note LimitSize represents the buffering limit size in bytes. For example, the number 67108864 sets the buffering limit size to 64 MB.
To confirm that the buffer limit is set correctly, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs GET w3svc/aspbufferinglimit
Reference: https://support.microsoft.com/en-us/kb/944886
If you are looking for the reason and don't want to fight the system settings, these are two major situations I faced:
You may have an infinite loop without a Next or recordset.MoveNext.
Your text data is very large but you think it is not! The common reason for this situation is copy-pasting an image from Microsoft Word directly into the editor, so the server translates the image into data objects and saves it in your text field. This can easily occupy database resources and cause a buffer problem when you call the data again.
In my case I just had to write this line before rs.Open:
Response.flush
rs.Open query, conn
It can also be due to the CursorTypeEnum. In my scenario the initial value was CursorTypeEnum.adOpenStatic (3).
After changing it to the default, CursorTypeEnum.adOpenForwardOnly (0), it went back to normal.