Backup on NAS - "The network is busy" error

We have a backup system in our company that writes backups to three different locations. For some reason, one location, a QNAP NAS called "Hades", intermittently fails with an error.
I would appreciate any ideas about what the problem could be.
Error while backing up (name of the program that we want to back up) on Qnap Hades: System.IO.IOException: The network is busy.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.InternalCopy(String sourceFileName, String destFileName, Boolean overwrite, Boolean checkHost)
at System.IO.File.Copy(String sourceFileName, String destFileName, Boolean overwrite)
at ERPBackUp.Program.<>c__DisplayClass5_0.b__1()

As the error message states, this is a network error, and those can always happen intermittently.
Usually the only option is to retry the operation after some time.
Another possibility is that the QNAP server itself is not answering, for example because it is busy with other work. In that case it may help to reduce the software running on the server down to the essentials: QNAP ships a number of services on its devices, and even more can be added through software packages, so it might be advisable to uninstall the things you don't need.
Increasing the swap partition or adding memory might also help.
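
Since the stack trace shows the failure coming from File.Copy, a pragmatic mitigation is to wrap the copy in a retry loop. Below is a minimal sketch of that idea; CopyWithRetry is a hypothetical helper (not part of the original ERPBackUp program), and the retry count and delay are arbitrary:

using System.IO;
using System.Threading;

static class BackupHelper
{
    // Hypothetical helper: retries File.Copy a few times when the network
    // share reports a transient IOException such as "The network is busy".
    public static void CopyWithRetry(string source, string destination,
                                     int maxAttempts = 5, int delayMs = 30000)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                File.Copy(source, destination, overwrite: true);
                return; // success
            }
            catch (IOException) when (attempt < maxAttempts)
            {
                // Transient network error: wait, then try again.
                Thread.Sleep(delayMs);
            }
        }
    }
}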

Related

Can a memory address tell you anything about how/where the object is stored

Is there a way you can identify whether an object is stored on the stack or heap solely from its memory address? I ask because it would be useful to know this when debugging and a memory address comes up in an error.
For instance:
If I have a memory address: 0x7fd8507c6
Can I determine anything about the object based on this address?
You don't mention which OS you are using. I'll answer for Microsoft Windows as that's the one I've been using for the last 25 years. Most of what I knew about Unix/Linux I've forgotten.
If you just have the address and no other information - for 32 bit Windows you can tell if it's user space (lower 2GB) or kernel space (upper 2GB), but that's about it (assuming you don't have the /3GB boot option).
If you have the address and you can run some code, you can use VirtualQuery() to get information about the address. If you get a non-zero return value, you can use the data in the returned MEMORY_BASIC_INFORMATION structure.
The State, Type, and Protect values will tell you about the possible uses for the memory - whether it's memory mapped, a DLL (Type & MEM_IMAGE != 0), etc. You can't infer from this information if the memory is a thread's stack or if it's in a heap. You can however determine if the address is in memory that isn't heap or stack (memory in a DLL is not in a stack or heap, non-accessible memory isn't in a stack or a heap).
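
For example, a minimal P/Invoke sketch of this check from C# might look like the following (the struct layout mirrors the Win32 MEMORY_BASIC_INFORMATION; DescribeAddress is a hypothetical helper):

using System;
using System.Runtime.InteropServices;

static class AddressInfo
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORY_BASIC_INFORMATION
    {
        public IntPtr BaseAddress;
        public IntPtr AllocationBase;
        public uint AllocationProtect;
        public IntPtr RegionSize;
        public uint State;    // MEM_COMMIT, MEM_RESERVE, MEM_FREE
        public uint Protect;  // PAGE_GUARD, PAGE_READWRITE, ...
        public uint Type;     // MEM_IMAGE, MEM_MAPPED, MEM_PRIVATE
    }

    const uint MEM_IMAGE = 0x1000000;
    const uint PAGE_GUARD = 0x100;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern UIntPtr VirtualQuery(IntPtr lpAddress,
        out MEMORY_BASIC_INFORMATION lpBuffer, UIntPtr dwLength);

    // Hypothetical helper: prints what VirtualQuery can tell us about an address.
    public static void DescribeAddress(IntPtr address)
    {
        MEMORY_BASIC_INFORMATION mbi;
        if (VirtualQuery(address, out mbi,
                (UIntPtr)(uint)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION))) == UIntPtr.Zero)
        {
            Console.WriteLine("VirtualQuery failed: " + Marshal.GetLastWin32Error());
            return;
        }
        if ((mbi.Type & MEM_IMAGE) != 0)
            Console.WriteLine("Inside a mapped image (EXE/DLL) - not stack or heap.");
        if ((mbi.Protect & PAGE_GUARD) != 0)
            Console.WriteLine("Guard page - possibly the end of a thread's stack.");
        Console.WriteLine("State=0x{0:X} Type=0x{1:X} Protect=0x{2:X}",
            mbi.State, mbi.Type, mbi.Protect);
    }
}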
To determine where a thread's stack is, you could examine all pages in the application looking for a guard page at the end of a thread's stack. You can then infer the extent of the stack space using the default stack size stored in the PE header (or, if you can't read that, just use the default of 1MB - few people change it) and check whether the address you have falls inside the inferred range.
To determine if the address is in a memory heap you'd need to enumerate the application heaps (GetProcessHeaps()) and then enumerate each heap (HeapWalk()) found checking the areas being managed by the heap. Not all Windows heaps can be enumerated.
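
As a sketch, the heap handles can be obtained from C# via P/Invoke; walking each heap with HeapWalk() additionally needs the PROCESS_HEAP_ENTRY structure, which is omitted here for brevity:

using System;
using System.Runtime.InteropServices;

static class HeapInfo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern uint GetProcessHeaps(uint NumberOfHeaps, [Out] IntPtr[] ProcessHeaps);

    // Returns the handles of all heaps in the current process.
    public static IntPtr[] GetHeaps()
    {
        uint count = GetProcessHeaps(0, null); // first call: ask for the count
        var heaps = new IntPtr[count];
        GetProcessHeaps(count, heaps);         // second call: fill the array
        return heaps;                          // each handle can be passed to HeapWalk()
    }
}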
To get any further than this you would need to have tracked allocations and deallocations to each heap and stored all that information for later use.
You could also track when threads are created/destroyed/exit and calculate the thread addresses that way.
That's a broad brush answer informed by my experience creating a memory leak detection tool (which needs this information) and numerous other tools.

ORA-12571: TNS:packet writer failure with ASP.NET

My development team is experiencing numerous ORA-12571: TNS:packet writer failure errors using ASP.NET 3.5 and 4.0 against Oracle 11g. These errors are inconsistent as to when they occur and are generated by numerous applications. The exception happens while calling seemingly random stored procedures, packages, and inline SQL statements. The Oracle 11 client is installed on the web server. Some applications use Microsoft's System.Data.OracleClient to connect to Oracle, and some use the .NET components provided by Oracle (ODP.NET). Both data access providers come up with the same error.
There are other non-.NET applications that run on a different web server but use the same database server; those apps do not have any such issues. My initial thinking is that something is configured incorrectly on the web server with the Oracle client.
Has anyone else received this error? What did you do to fix it?
ORA-12571: TNS:packet writer failure
Stack Trace:
at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc)
at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals)
at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, ArrayList& resultParameterOrdinals)
at System.Data.OracleClient.OracleCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.OracleClient.OracleCommand.ExecuteDbDataReader(CommandBehavior behavior)
at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)
Another possibility is that a firewall between you and the Oracle database decides your connection is dead and closes it underneath you. You only find out when you try to execute a query and get the ORA-12571 error.
This is caused by having connections open for a long time with no activity.
The solution is to add the SQLNET.EXPIRE_TIME parameter to the sqlnet.ora file on the server and set it to an interval in minutes. Setting it to 10 causes each connection to be pinged every 10 minutes to ensure it is still alive.
The result is that your firewall sees network activity and does not close the connection.
SQLNET.EXPIRE_TIME=10
ORA-12571: TNS:packet writer failure - One of the hardest problems I've had to resolve
I think this is a bug in Oracle. I came across many issues with the DbDataAdapter.Fill method where the Oracle client would choke with a memory error. This was resolved for me by using the 11.2.0.2 client with patch 6 applied.
If you search Oracle's Support site you will see many issues like this.
Also check for "Read Protected Memory" issues with the 11gR1 / 11gR2 clients.
After I installed the ELMAH module and could analyze the exceptions, I tried to:
Change the connection configuration.
Remove and/or update the server firewall rules.
Update the Oracle client on the server machine.
None of the options above resolved the problem, but I had been overlooking the obsolete provider (System.Data.OracleClient) that we were using. After I replaced it with the latest version of ODP.NET (Oracle.DataAccess), everything started to work flawlessly.
Note: based on your exception description, you are currently using the obsolete provider.
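
For reference, the switch is mostly a matter of swapping the provider namespace and types; a minimal sketch, with placeholder credentials and TNS alias:

using System;
// using System.Data.OracleClient;   // obsolete Microsoft provider - remove
using Oracle.DataAccess.Client;      // ODP.NET

class OracleDemo
{
    static void Main()
    {
        // Placeholder credentials/TNS alias - substitute your own.
        string connStr = "User Id=scott;Password=tiger;Data Source=ORCL";
        using (var conn = new OracleConnection(connStr))
        using (var cmd = new OracleCommand("SELECT SYSDATE FROM dual", conn))
        {
            conn.Open();
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}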

NoSQL for ASP.NET: my experience with NoRM and MongoDB

Over the last few days I developed a web page (http://www.srtbox.com/) to test my architecture (more info here), using NoRM with MongoLab or MongoHQ for DB hosting. I'm having a lot of errors with NoRM, all in the Norm.BSON.BsonDeserializer class. I could fix one, but now I'm getting errors in the connection. Error:
System.Net.Sockets.SocketException
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.IO.BufferedStream.Read(Byte[] array, Int32 offset, Int32 count)
at System.IO.BinaryReader.ReadBytes(Int32 count)
at Norm.BSON.BsonDeserializer.Deserialize[T](BinaryReader stream, Int32 length)
The truth is that the NoRM driver seems really unstable... The official driver does not offer LINQ support, and I did not find a single example with a POCO object. Does anyone have good experience with a NoSQL database and ASP.NET? Something scalable like MongoDB? RavenDB seems a nice option, but MongoDB has more success stories, though of course not with .NET.
So the big question is: which scalable NoSQL DB would you recommend for ASP.NET? Do you have any success stories?
PS: I would be thankful if you visited my site (http://www.srtbox.com/) to test it.
That looks like a network error, not a problem with the driver. If the database isn't local, you will see this when the connection is bad or the firewall isn't set up correctly - there is not much you can do differently in code to change that. Trying to keep a single connection open for too long can also cause connection errors.
Most of the NoSQL databases available work well with .NET, so you can choose based on functional requirements rather than .NET compatibility. However, you shouldn't expect them to work just like SQL or to have as many examples - most are used primarily on other platforms, and since they all have different ways of running queries, LINQ isn't always a good fit anyway.
Also, what do you mean by POCO? The serialization attributes can make the classes look complicated, but they are just regular objects, not the lazy-loaded, self-updating objects you get from a typical ORM.
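
To illustrate the POCO point: with the current official MongoDB .NET driver (the MongoDB.Driver package) rather than NoRM, a plain class works directly, no special base class required. A minimal sketch, with made-up class and connection details:

using System;
using MongoDB.Bson;
using MongoDB.Driver;

// A plain C# object - no proxies, no lazy loading.
public class BlogPost
{
    public ObjectId Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

class MongoDemo
{
    static void Main()
    {
        // Placeholder URI - a MongoLab/MongoHQ connection string would go here.
        var client = new MongoClient("mongodb://localhost:27017");
        var posts = client.GetDatabase("blog").GetCollection<BlogPost>("posts");

        posts.InsertOne(new BlogPost { Title = "Hello", Body = "World" });
        Console.WriteLine(posts.CountDocuments(Builders<BlogPost>.Filter.Empty));
    }
}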

What Causes "Internal connection fatal errors"

I've got a number of ASP.Net websites (.Net v3.5) running on a server with a SQL 2000 database backend. For several months, I've been receiving seemingly random InvalidOperationExceptions with the message "Internal connection fatal error". Sometimes there's a few days in between, while other times there are multiple errors per day.
The exception is not limited to one site in particular, though they share business and data access assemblies. The error seems to always be thrown from SqlClient.TdsParser.Run(). It sometimes is thrown from old-school direct SqlCommand.Execute() calls, while other times it is thrown from Linq2Sql code.
I've been assured by the network guys that there are no errors or packets lost on their end. Has anyone else experienced this? Could it be a driver problem? We have been unable as of yet to pinpoint a specific trigger for this exception.
We're running IIS 6 on Windows Server 2003.
After a few months of ignoring this issue, it started to reach a critical mass as traffic gradually increased. Under heavy load, including some crawlers, things got crazy and these errors poured in nonstop.
Through trial and error, we eventually tracked down a handful of SqlCommand or LINQ queries whose SqlConnection wasn't closed immediately after use. Instead, through some sloppy programming originating from a misunderstanding of LINQ connections, the DataContext objects were disposed (and connections closed) only at the end of a request rather than immediately.
Once we refactored these methods to close the connection immediately after use with a C# "using" block (freeing up the pooled connection for the next request), we received no more errors. While we still don't know the underlying reason a connection pool would get so mixed up, we were able to eliminate all errors of this type. This problem was resolved in conjunction with another similar error I posted, found here: Why is my SqlCommand returning a string when it should be an int?
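
The shape of the fix is simply deterministic disposal: open the connection, use it, and dispose it before the method returns so the pooled connection is released immediately. A sketch (the table and method names are invented):

using System.Data.SqlClient;

class OrdersRepository
{
    public int CountOrders(string connStr)
    {
        // Dispose the connection as soon as the query completes, instead of
        // holding it (or a DataContext that owns it) until the end of the request.
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}

The same pattern applies to LINQ to SQL: wrap the DataContext in a using block at the point of use rather than disposing it at the end of the request.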
Sounds like the database connection is getting dropped or timing out.
We recently had similar issues moving to IIS 6 from IIS 5 connecting to SQL 2000. Our issue was solved by increasing the number of ephemeral ports available.
Look at the usage of ephemeral ports by the IIS server. The default maximum number of ports available is normally around 4000. You might want to consider increasing this if the sites on your server are particularly busy or your application makes a lot of database calls.
You can monitor these first to see if you are going over the maximum.
Search the Microsoft Knowledge Base for "MaxUserPort" and "TcpTimedWaitDelay" and make the necessary registry changes. Make sure you back up the registry or snapshot the server before making the changes; a reboot is needed for the changes to take effect.
You should also double-check that your database and recordset connections are being closed after use. Not closing them uses up this port range unnecessarily.
Check the efficiency of your stored procedures as well, as they might be taking longer than they need to.
"If you rapidly open and close 4000 sockets in less than four minutes, you will reach the default maximum setting for client anonymous ports, and new socket connection attempts fail until the existing set of TIME_WAIT sockets times out." - from http://support.microsoft.com/kb/328476
Check your server's LOG folder (\program files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG or similar) for files named SqlDump*.mdmp and SqlDump*.txt. If you do find any you'll have to take it to Product Support.
I was creating a new EF Core project and was trying to create the database on an external Linux server instead of a Windows server or a local one. After hours of searching I realized that the target server was running MySQL, while my project was configured for Microsoft SQL Server.
I found it weird that everyone was using port 1433 instead of the usual 3306. So to fix my 'Internal connection fatal error' I had to set up a Docker instance of SQL Server bound to its default port of 1433.
It literally was that simple. In the Docker repository look for "microsoft-mssql-server" and run the image as described in the description. Everything works now and I am able to push my database from my EF Core project to the external server.
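
For reference, once SQL Server is listening on 1433, the EF Core side is just a SQL Server connection string; a sketch with placeholder values (the server name and password are made up):

using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        // Placeholder values - note SQL Server's default port 1433, not MySQL's 3306.
        => options.UseSqlServer(
            "Server=myserver,1433;Database=AppDb;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True");
}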

Unexpected data found error during BizTalk Simultaneous Receive

I have a receive port with two FILE receive locations polling the same network share. The only difference between the receive locations is that they use a different file mask. They both use a custom pipeline with single Flat file disassembler component. I have a send port subscribing to the receive port. (this is just the minimal setup where I can reproduce the problem).
When processing a group of files (up to 1mb in size) occasionally the pipeline throws an error. This only occurs when more than one file is copied to the receive location file share at once and occurs irregularly. The error generally reads:
An error occurred when parsing the incoming document: "Unexpected data found while looking for: '\r\n' The current definition being parsed is GIRMFile. The stream offset where the error occured is 491540. The line number where the error occured is 2446. The column where the error occured is 199.".
Examining the suspended message at the line number shown, consistently 512 bytes of data is different from the incoming message. This 512 bytes of data always matches data from one of the other input files consumed at the same time! Or in a few rare cases the incorrect 512 bytes of data is data from a file consumed at the same time but after it had been processed by the pipeline (i.e. the suspended flat file has a 512 byte chunk of xml!). The 512 bytes is never in a consistent location within the suspended messages.
Thinking the BizTalk databases were corrupted in some way, I deleted them and re-configured. The problem returned after a few hundred files were processed successfully.
This only occurs on our test box (a VMWare vm) so I suspect the machine is the problem in some way. But it seems odd that the machine isn't reporting other errors in other processes.
Interesting - I recall seeing similar things in BizTalk 2004 but haven't seen anything like that with BT2006.
It sounds like the pipeline may be running into threading issues - perhaps due to receiving the files from the same file location.
Have you tried any of the advanced file receive location properties?
I'm thinking in particular the 'Rename files while reading' checkbox. Perhaps if the issue is with non-threadsafe stream reads, this process of creating a renamed file (which I think just uses standard IO libraries) will allow BizTalk to get a clean stream.
Only guessing though - please report back if you find a solution!
This only occurs on our test box (a VMWare vm)
If you cannot reproduce this on another machine with the same configuration, I'd mark it off as a non-issue or as external. I agree that concurrency problems are highly unlikely.
I have to say I find this very strange; I would find it very hard to believe that, five years into BizTalk (counting from 2004 :-)), the FILE adapter and the standard disassemblers have threading issues.
Are the files arriving at the pickup location over the network? What file masks are you using? Is there a chance that one of the receive locations is picking up the files before their transfer is complete?
You said the receive location is on a network share - perhaps it's a network problem? Can you reproduce it on a local drive?
A few more thoughts... is the share a DFS share? Can you put the receive locations on different hosts and see what happens then?
We have had similar issues with programs running on VMware VMs accessing shares. For some reason files would appear to be corrupt.
This was not BizTalk related; it was happening with an in-house developed application.
Rebooting the VM fixed the issue for a while. In our case we were able to reconfigure our process to not use shares. We never did pursue a solution to the real problem.
