We are using Gatling with a very simple scenario: reading URLs from a CSV file and requesting them.
We get a throughput of ~18K requests/sec.
Are there any ideas on how to push this number up?
We tried adding the Keep-Alive header to avoid the overhead of opening and closing connections, but it doesn't help.
Here's our code:
class MySimulation extends Simulation {

  val httpProtocol = http
    .baseURL("http://localhost:9090/")

  val csvFeeder = csv("uniq_urls_500.csv").random

  val scn = scenario("MySimulation")
    .feed(csvFeeder)
    .repeat(10000) {
      exec(
        http("request_0")
          .get("?loc=${Url}")
          .header("Keep-Alive", "1500000")
      )
    }

  setUp(scn.inject(
    rampUsers(100) over(5 seconds)
  )).protocols(httpProtocol)
}
Increase the number of users in your test scenario from 100 to a higher number to increase the load on your server.
Make sure the box you are running the Gatling test from can handle that much load.
If the box is struggling, you can run Gatling from multiple boxes.
Since .NET Core 3 preview 9, I am facing an issue invoking a .NET method and passing a large string from JavaScript.
Code is worth more than a thousand words, so the snippet below reproduces the issue. It works when length = 1 * mb but fails when length = 2 * mb.
#page "/repro"
<button onclick="const mb = 1024 * 1024; const length = 2 * mb;console.log(`Attempting length ${length}`); DotNet.invokeMethod('#GetType().Assembly.GetName().Name', 'ProcessString', 'a'.repeat(length));">Click Me</button>
#functions {
[JSInvokable] public static void ProcessString(string stringFromJavaScript) { }
}
The error message is:
Uncaught RuntimeError: memory access out of bounds
at wasm-function[2639]:18
at wasm-function[6239]:10
at Module._mono_wasm_string_from_js (http://localhost:52349/_framework/wasm/mono.js:1:202444)
at ccall (http://localhost:52349/_framework/wasm/mono.js:1:7888)
at http://localhost:52349/_framework/wasm/mono.js:1:8238
at Object.toDotNetString (http://localhost:52349/_framework/blazor.webassembly.js:1:39050)
at Object.invokeDotNetFromJS (http://localhost:52349/_framework/blazor.webassembly.js:1:37750)
at u (http://localhost:52349/_framework/blazor.webassembly.js:1:5228)
at Object.e.invokeMethod (http://localhost:52349/_framework/blazor.webassembly.js:1:6578)
at HTMLButtonElement.onclick (<anonymous>:2:98)
I need to process large strings, which represent the content of a file.
Is there a way to increase this limit?
Apart from breaking the string into multiple segments and performing multiple calls, is there any other way to process a large string?
Is there any other approach for processing large files?
This used to work in preview 8.
Is there a way to increase this limit?
No (unless you modify and recompile blazor and mono/wasm that is).
Apart from breaking the string into multiple segments and performing multiple calls, is there any other way to process a large string?
Yes, since you are on the client side, you can use shared-memory techniques: you basically map a .NET byte[] to an ArrayBuffer. See this (disclaimer: my library) or this library for reference on how to do it. These examples use the binary content of actual JavaScript File objects, but the approach is applicable to strings as well. There is no reference documentation for these APIs yet; there are mostly just examples and the Blazor source code.
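If you do fall back to segmenting, as the question mentions, the receiving side can stay very small. Here's a minimal hedged sketch; the class and method names are hypothetical, and JavaScript would call AppendChunk repeatedly with slices well under the limit before calling Complete:

using System.Text;
using Microsoft.JSInterop;

public static class LargeStringReceiver // hypothetical helper, not a Blazor API
{
    private static readonly StringBuilder Buffer = new StringBuilder();

    [JSInvokable]
    public static void AppendChunk(string chunk)
    {
        // Keep each chunk well below the ~1 MB size that triggers the error above.
        Buffer.Append(chunk);
    }

    [JSInvokable]
    public static void Complete()
    {
        string content = Buffer.ToString();
        Buffer.Clear();
        // ... process the reassembled string here ...
    }
}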
Is there any other approach for processing large files?
See 2)
I recreated your issue in a netcore 3.2 Blazor app (somewhere between 1 and 2 MB of data kills it, just as you described). I updated the application to .NET 5.0 and the problem is fixed (it was still working when I threw 50 MB at it).
I wrote code in AX 2009 to poll a directory on a network drive every second, waiting for a response file from another system. I noticed that, using a file explorer window, I could see the file appear, yet my code was not seeing and processing the file for several seconds - up to 9 seconds (and 9 polls) after the file appeared!
The AX code calls System.IO.Directory::GetFiles() using ClrInterop:
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
files = System.IO.Directory::GetFiles(#POLLDIR,'*.csv');
// etc...
CodeAccessPermission::revertAssert();
After much experimentation, it emerges that the first time in my program's lifetime, that I call ::GetFiles(), it starts a notional "ticking clock" with a period of 10 seconds. Only calls every 10 seconds find any new files that may have appeared, though they do still report files that were found on an earlier 10s "tick" since the first call to ::GetFiles().
If, when I start the program, the file is not there, then all the other calls to ::GetFiles(), 1 second after the first call, 2 seconds after, etc., up to 9 seconds after, simply do not see the file, even though it may have been sitting there since 0.5 s after the first call!
Then, reliably, and repeatably, the call 10s after the first call, will find the file. Then no calls from 11s to 19s will see any new file that might have appeared, yet the call 20s after the first call, will reliably see any new files. And so on, every 10 seconds.
Further investigation revealed that if the polled directory is on the AX AOS machine, this does not happen, and the file is found immediately, as one would expect, on the call after the file appears in the directory.
But this figure of 10s is reliable and repeatable, no matter what network drive I poll, no matter what server it's on.
Our network certainly doesn't have 10s of latency to see files; as I said, a file explorer window on the polled directory sees the file immediately.
What is going on?
Sounds like your issue is due to SMB caching - from this technet page:
Name, type, and ID: Directory Cache [DWORD] DirectoryCacheLifetime
Registry key the cache setting is controlled by: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters
This is a cache of recent directory enumerations performed by the
client. Subsequent enumeration requests made by client applications as
well as metadata queries for files in the directory can be satisfied
from the cache. The client also uses the directory cache to determine
the presence or absence of a file in the directory and uses that
information to prevent clients from repeatedly attempting to open
files which are known not to exist on the server. This cache is likely
to affect distributed applications running on multiple computers
accessing a set of files on a server – where the applications use an
out of band mechanism to signal each other about
modification/addition/deletion of files on the server.
In short, try setting the registry key
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters\DirectoryCacheLifetime
to 0.
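If you'd rather flip it from code than via regedit, here is a minimal hedged sketch using the standard .NET registry API (run as administrator; the key path is the one quoted above):

using Microsoft.Win32;

class DisableSmbDirectoryCache
{
    static void Main()
    {
        // Needs administrator rights. A value of 0 disables the SMB client's directory cache.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters",
            "DirectoryCacheLifetime",
            0,
            RegistryValueKind.DWord);
    }
}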
Thanks to @Jan B. Kjeldsen, I have been able to solve my problem using FileSystemWatcher. Here is my implementation in X++:
class SelTestThreadDirPolling
{
}
public server static container SetStaticFileWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    InteropPermission              interopPerm;
    System.IO.FileSystemWatcher    fw;
    System.IO.WatcherChangeTypes   watcherChangeType;
    System.IO.WaitForChangedResult res;
    container                      cont;
    str                            fileName;
    str                            oldFileName;
    str                            changeType;
    ;
    interopPerm = new InteropPermission(InteropKind::ClrInterop);
    interopPerm.assert();

    fw = new System.IO.FileSystemWatcher();
    fw.set_Path(_dirPath);
    fw.set_IncludeSubdirectories(false);
    fw.set_Filter(_filenamePattern);

    watcherChangeType = ClrInterop::parseClrEnum('System.IO.WatcherChangeTypes', 'Created');
    res = fw.WaitForChanged(watcherChangeType, _timeoutMs);

    if (res.get_TimedOut())
        return conNull();

    fileName = res.get_Name();
    // ChangeType can be: Created, Deleted, Renamed or Changed
    changeType = System.Enum::GetName(watcherChangeType.GetType(), res.get_ChangeType());
    fw.Dispose();
    CodeAccessPermission::revertAssert();

    if (changeType == 'Renamed')
        oldFileName = res.get_OldName();

    cont += fileName;
    cont += changeType;
    cont += oldFileName;
    return cont;
}
void waitFileSystemWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    container cResult;
    str filename, changeType, oldFilename;
    ;
    cResult = SelTestThreadDirPolling::SetStaticFileWatcher(_dirPath, _filenamePattern, _timeoutMs);

    if (cResult)
    {
        [filename, changeType, oldFilename] = cResult;
        info(strFmt("filename=%1, changeType=%2, oldFilename=%3", filename, changeType, oldFilename));
    }
    else
    {
        info("TIMED OUT");
    }
}
void run()
{
    ;
    this.waitFileSystemWatcher(@'\\myserver\mydir', 'filepattern*.csv', 10000);
}
I should acknowledge the following for forming the basis of my X++ implementation:
https://blogs.msdn.microsoft.com/floditt/2008/09/01/how-to-implement-filesystemwatcher-with-x/
I would guess DAXaholic's answer is correct, but you could try other solutions like EnumerateFiles.
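For what it's worth, a minimal sketch of the EnumerateFiles variant in plain C# (the share path is a placeholder; in X++ you would wrap the call in the same ClrInterop assert as in the question):

using System;
using System.IO;

class PollOnce
{
    static void Main()
    {
        // EnumerateFiles yields results lazily instead of materializing the whole array up front.
        foreach (string file in Directory.EnumerateFiles(@"\\myserver\mydir", "*.csv"))
        {
            Console.WriteLine(file); // process each file as it is returned
        }
    }
}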
In your case I would rather wait for the files than poll for them.
With FileSystemWatcher there will be minimal delay from file creation until your process wakes up. It is trickier to use, but avoiding polling is a good thing. I have never used it over a network, though.
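For reference, the same wait looks roughly like this in plain C# (a hedged sketch; the path, pattern, and timeout are placeholders matching the X++ version above):

using System;
using System.IO;

class WaitForCsv
{
    static void Main()
    {
        using (var watcher = new FileSystemWatcher(@"\\myserver\mydir", "*.csv"))
        {
            watcher.IncludeSubdirectories = false;

            // Blocks until a matching file is created or the 10 s timeout elapses.
            WaitForChangedResult result = watcher.WaitForChanged(WatcherChangeTypes.Created, 10000);

            if (result.TimedOut)
                Console.WriteLine("TIMED OUT");
            else
                Console.WriteLine($"Found: {result.Name}");
        }
    }
}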
Is there any way to check which query is so CPU-intensive in the _sqlsrv2 process?
Something that would give me information about the query being executed in that process at that moment.
Is there any way to terminate that query without killing the _sqlsrv2 process?
I cannot find any official materials on this subject.
Thank you for any help.
You could look into client database-request caching.
The code examples below assume you have ABL access to the environment. If not, you will have to use SQL instead, but it shouldn't be too hard to "translate" the code below.
I haven't used this a lot myself, but I wouldn't be surprised if it has some impact on performance.
You need to start caching in the active connection. This can be done in the connection itself or remotely via VST tables (as long as your remote session is connected to the same database) so you need to be able to identify your connections. This can be done via the process ID.
Generally how to enable the caching:
/* "_myconnection" is your current connection. You shouldn't do this */
FIND _myconnection NO-LOCK.
FIND _connect WHERE _connect-usr = _myconnection._MyConn-userid.
/* Start caching */
_connect._Connect-CachingType = 3.
DISPLAY _connect WITH FRAME x1 SIDE-LABELS WIDTH 100 1 COLUMN.
/* End caching */
_connect._Connect-CachingType = 0.
You need to identify your process first, via top or another program.
Then you can do something like:
/* Assuming pid 21966 */
FIND FIRST _connect NO-LOCK WHERE _Connect._Connect-Pid = 21966 NO-ERROR.
IF AVAILABLE _Connect THEN
    DISPLAY _connect.
You could also look at the _Connect-Type. It should be 'SQLC' for SQL connections.
FOR EACH _Connect NO-LOCK WHERE _Connect._connect-type = "SQLC":
    DISPLAY _connect._connect-type.
END.
Best of all would be to do this in a separate environment. If you can't, at least try it in a test environment first.
Here's a good guide.
You can use a Select like this:
select
c."_Connect-type",
c."_Connect-PID" as 'PID',
c."_connect-ipaddress" as 'IP',
c."_Connect-CacheInfo"
from
pub."_connect" c
where
c."_Connect-CacheInfo" is not null
But first you need to enable the connection cache; follow the example above.
I saw very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.Send method, the memory consumed by the process goes up. I looked at the object graph using a memory profiler and found that Rebus is holding the response message in serialized format somewhere.
The object graph showed the following hierarchy to the root:
System.Message --> CachedBodyMessage --> stream
Please give me some pointers if anybody is aware of this.
I understand that a memory leak is a grave concern, but my belief is that it is unlikely that Rebus contains a memory leak.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You're mentioning "CachedBodyMessage" - judging by the names of fields inside System.Messaging.Message, it sounds like it's something within MSMQ. To try to reproduce your issue, I coded the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
    // arrange
    const string inputQueueName = "test.leak.input";
    var queue = new MsmqMessageQueue(inputQueueName);
    disposables.Add(queue);

    var body = Encoding.UTF8.GetBytes(new string('*', 32768));
    var message = new TransportMessageToSend
    {
        Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
        Body = body
    };

    var weakMessageRef = new WeakReference(message);
    var weakBodyRef = new WeakReference(body);

    // act
    queue.Send(inputQueueName, message, new NoTransaction());
    message = null;
    body = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();

    // assert
    Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
    Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
which verifies that the sent transport message is collected as it should be (it will only do so in RELEASE mode though, because of the way DEBUG mode holds on to object references within scope).
I'll try and run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information about e.g. exactly which objects are leaking, it would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have been able to see something rising on the graph.
This is for ASP.NET. I want to improve the time it takes to run my function; today it takes around 20-30 seconds, closer to 30 than 20 though. That's running on one thread making 20 web requests.
I'm thinking of using threads that do all 20 web requests, in order to quickly find the result, or just go through the data (i.e. make all 20 requests without finding anything).
Here's how it works:
1. I use HTML Agility Pack to fetch HTML documents.
2. Then I parse them for information.
3. Lastly I add that information to a dictionary, or I move on to the next web request, until I reach 20 requests made.
I make at most 20 web requests, at minimum 1. I have set the function to end when the info I'm searching for is found. Sometimes the info isn't there, hence the 20 web requests (it goes through all the data).
Every web request adds between 5 and 20 entries to the dictionary. This is then compared with the information I sent to it; if it's in the list I get the key back, otherwise it returns 201. If found, it gets added to the database.
QUESTIONS
A: If I want to do this with threads, how many should I create? 20, one for each request, and let them all loose to do the job? Or should I create 4 of them, each making at most 5 requests?
B: What if two threads finish at the same time and want to add info to the dictionary? Can that lock the whole site (I'm using ASP.NET), or will it just add one result from thread A and then one from thread B? I already have a check that verifies the key doesn't exist before adding it.
C: What would be the fastest way to do this?
Here's my code, showing the loop that makes the (at most) 20 requests:
public void FetchAndParseAllPages()
{
    int _maxSearchDepth = 200;
    int _searchIncrement = 10;
    PageFetcher fetcher = new PageFetcher();

    for (int i = 0; i < _maxSearchDepth; i += _searchIncrement)
    {
        string keywordNsearch = _keyword + i;
        ParseHtmldocuments(fetcher.GetWebpage(keywordNsearch));

        if (GetPostion() != 201)
        {
            // Add the data to the database
            InsertRankingData(DocParser.GetSearchResults(), _theSearchedKeyword);
            return;
        }
    }
}
.NET allows only 2 concurrent requests to the same host by default. If you want more than that, you need to configure it in web.config. Look here: http://msdn.microsoft.com/en-us/library/aa480507.aspx
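If you prefer code over configuration, the same limit can be raised programmatically; a minimal sketch (20 matches the number of requests above):

using System.Net;

// Raise the default limit of 2 concurrent connections per host
// before any requests are issued (e.g. in Application_Start).
ServicePointManager.DefaultConnectionLimit = 20;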
You can use the Parallel.For method, which is very straightforward and handles the "how many threads" question for you. Of course you can tweak how many threads (or tasks) you want with ParallelOptions. Look here: http://msdn.microsoft.com/en-us/library/dd781401.aspx
For a thread-safe dictionary you can use the ConcurrentDictionary. Look here: http://msdn.microsoft.com/en-us/library/dd287191.aspx
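Putting the last two suggestions together, a minimal hedged sketch; FetchEntries is a hypothetical stand-in for your fetch-and-parse step:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class ParallelFetchSketch
{
    static readonly ConcurrentDictionary<string, int> Results = new ConcurrentDictionary<string, int>();

    static void RunAll()
    {
        // At most 4 concurrent requests; each iteration handles one page.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

        Parallel.For(0, 20, options, i =>
        {
            foreach (KeyValuePair<string, int> entry in FetchEntries(i))
            {
                Results.TryAdd(entry.Key, entry.Value); // thread-safe add, no manual locking
            }
        });
    }

    // Hypothetical: fetch page i and parse it into key/value entries.
    static IEnumerable<KeyValuePair<string, int>> FetchEntries(int i)
    {
        yield break;
    }
}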