For various reasons related to network bandwidth and performance, we have an application that converts large Unicode strings to byte arrays and compresses them with the .NET GZipStream class (.NET Core 3.1).
We want to store these in a SQL Server 2017 varbinary(max) column for later retrieval and decompression - this works fine using the same GZipStream library to decompress.
We would also like to take advantage of the T-SQL DECOMPRESS function to be able to query the data on the DB server (without having to bring it all back into .NET and decompress it there).
Whilst DECOMPRESS does seem to work (as in it doesn't throw an error, and it produces a binary output that can be cast to nvarchar(max)), the resulting nvarchar is totally different from the original source - it actually crashes SSMS when displayed!
This is not a problem if we pass the uncompressed string into SQL Server and compress it there using the COMPRESS function, but we do not want to do this because it requires an additional decompression step and extra bandwidth consumption.
I have ensured we're on CU20 of SQL Server 2017, so I don't think it's a patching problem.
I have tried using the different compression level options in the .NET library, but they all produce the same problem.
It would appear that, despite both being GZip compression, the T-SQL and .NET implementations are not compatible, but if anyone has succeeded in combining the two, I would appreciate hearing how.
Arrgh... I was being an idiot! My source strings (they were XML) were UTF-8, not UTF-16, so the DECOMPRESS output has to be cast to varchar(max) rather than nvarchar(max). This worked:
using (var compressStream = new MemoryStream())
using (var compressor = new GZipStream(compressStream, CompressionMode.Compress))
{
    // Encode as UTF-8 before compressing - the cast on the SQL side must match this encoding.
    compressor.Write(Encoding.UTF8.GetBytes(largeString));
    compressor.Close(); // close the GZip stream so the footer is flushed before reading the buffer
    var bytesToWriteToSQLVarbinaryMax = compressStream.ToArray();
}
And then I could do:
SELECT CAST(DECOMPRESS(bytes) AS varchar(max)) FROM compressedTable
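For completeness, here is the matching .NET-side decompression (a minimal sketch, assuming the same UTF-8 encoding used above; the helper name is mine):
using System.IO;
using System.IO.Compression;
using System.Text;

static string DecompressToString(byte[] compressedBytes)
{
    using (var input = new MemoryStream(compressedBytes))
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        gzip.CopyTo(output); // stream the decompressed bytes out
        return Encoding.UTF8.GetString(output.ToArray()); // decode with the same encoding used to compress
    }
}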
I have an in-memory TinkerGraph. I want to write a Spring Boot REST controller to expose a serialized (as GraphML) representation of this TinkerGraph. The serialization API (g.io()) needs a String file path to be passed to it. Currently I am having to write to a /tmp file and then read the file back to get a String representation of the serialized GraphML.
Is there a way to directly get a String output of the serialized GraphML? Without having to write into a tmp file and read it back in?
g.io("graph.xml").write().iterate()
As of the current latest release 3.4.2, I'm afraid that there is no way to do it with the Gremlin language. The reason it only writes to a file and not to something like a Java OutputStream is that the io() step is meant to be programming-language agnostic. Python and other languages off the JVM have no way to construct or specify such an object, so writing to a file makes it work across the board. I don't know if that will change in the future, unless we come up with a reasonable API that would work intuitively across programming languages.
Since you are using an in-memory TinkerGraph, you could bypass Gremlin and go back to the very old way of doing things:
Graph graph = TinkerFactory.createModern();
try (OutputStream os = new FileOutputStream("tinkerpop-modern.xml")) {
    graph.io(IoCore.graphml()).writer().normalize(true).create().writeGraph(os, graph);
}
You would just replace the FileOutputStream with whatever kind of OutputStream you wanted to use - for example, a ByteArrayOutputStream, whose toString("UTF-8") would give you the GraphML as a String directly. This approach uses the old Graph API, which I think is just deprecated in newer versions, so the option should still be available to you. Note that if you are not on the JVM, the only way to return a String would be by submitting a Gremlin script to Gremlin Server.
Does Mono.Data.SQLite support in-memory databases using Shared Cache Mode? I'm getting an ArgumentException complaining about the format of the data source URI string. The connection string I'm attempting to use is:
URI=file::memory:?cache=shared;Version=3
I've also tried with the Data Source connection string property, with similar results.
(For reference, this is in the context of an iPhone MonoTouch application.)
We have our application developed and tested with SQL Server 2008 R2 using ASP.NET on Windows Server. Now we have a requirement to move the database from Windows to Oracle on Red Hat Linux.
We haven't yet setup the infrastructure to test the same. I would like to know in the meantime if anyone has successfully done this kind of thing. Pointers to any resources will be a great advantage.
Is changing the connection string the only thing that needs to be done or are there any specific configuration in Linux to allow this?
I will verify this once I get the environment ready, but as a head start, if anyone has any similar experience, do share.
Thanks in advance.
P.S.: For migration of table structures, stored procedures, etc. to Oracle we will be using the SQL Developer tool.
I would like to answer my own question, because migration to Oracle is not that straightforward, but there are some tips that may help anyone migrate to Oracle on Windows or Linux with fewer headaches.
First, the SQL Developer tool does a good job of migrating the SQL Server schema and data to Oracle, including stored procedures, constraints, triggers, etc.
It also does a good job of data type mapping and provides the option to remap data types if required.
Some caveats and precautions:
Oracle has a limitation of about 30 characters on the length of stored procedure names. This is an area where you may need to resort to some manual renaming, because stored procedures or identifiers whose names are longer than 30 characters may get truncated during migration.
The other common issue that you may face is with respect to date insertion and formatting. You can use the following snippet to avoid the headache. The common error will be "Not a valid month."
OracleConnection conn = new OracleConnection(oradb); // C#
conn.Open();
OracleGlobalization session = conn.GetSessionInfo();
session.DateFormat = "DD.MM.RR"; // change the format as required here
conn.SetSessionInfo(session);
The most annoying error will be the character-to-numeric conversion error (or a related error) when inserting or updating data.
The issue here is that when you add parameters to the command object for the SQL Server provider, the binding happens by name, but for Oracle.DataAccess the default binding is by position. Here's the post that saved me a lot of headache:
ODP .NET Parameter problem with uint datatype
What you can do is set command.BindByName = true;
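For example, a minimal sketch (the table, column, and parameter names here are hypothetical):
// With BindByName = true, ODP.NET matches parameters by name instead of by position,
// so the order in which they are added no longer matters.
using (var conn = new OracleConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    cmd.BindByName = true;
    cmd.CommandText = "SELECT * FROM employees WHERE dept_id = :deptId AND hire_date > :hiredAfter";
    cmd.Parameters.Add("hiredAfter", OracleDbType.Date).Value = new DateTime(2010, 1, 1);
    cmd.Parameters.Add("deptId", OracleDbType.Int32).Value = 42;
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        // ... read rows ...
    }
}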
When migrating SPs that return data, Oracle creates an out parameter of type ref cursor. This needs to be taken care of while constructing command parameters.
For example:
OracleParameter refp = new Oracle.DataAccess.Client.OracleParameter("cv_1", OracleDbType.RefCursor, ParameterDirection.InputOutput);
command.Parameters.Add(refp);
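A slightly fuller sketch of calling such a procedure (the procedure name is hypothetical, conn is assumed open; "cv_1" is the cursor parameter created by the migration as above):
using (var cmd = new OracleCommand("GET_REPORT_DATA", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.BindByName = true;
    cmd.Parameters.Add(new OracleParameter("cv_1", OracleDbType.RefCursor, ParameterDirection.InputOutput));
    // ... add the procedure's regular input parameters here ...
    using (var reader = cmd.ExecuteReader()) // the ref cursor comes back as a result set
    {
        while (reader.Read())
        {
            // ... read columns ...
        }
    }
}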
Also, SQL Server requires parameters to SPs to be prefixed with "@" and Oracle doesn't. This can easily be taken care of in your data layer.
Also, since there is no bit data type in Oracle, number(1) works fine. You may need to convert your bool values to numeric where required.
Hope this helps someone avoid migration headaches. I will post more issues if I encounter them.
Backend: SQL Server 2008 database with FileStream enabled
Data Access: Linq to Entities
I have thousands of PDFs that currently reside on a file server. I would like to move these PDFs off of the file server and into a SQL Server 2008 database so that I can manage them more easily.
As a proof of concept (i.e., to ensure that the new FILESTREAM ability in SQL Server 2008 is what I'm looking for), I wrote a small app that would read and write these PDFs to the FILESTREAM-enabled database via the Entity Framework.
The app is very simple; here's the code:
datReport report = new datReport();
report.ReportName = "ANL-7411-Rev-Supp-1.pdf";
report.RowGuid = Guid.NewGuid();
// The following line blows up on really big PDFs (350+ MBs)
report.ReportData = File.ReadAllBytes(@"C:\TestSavePDF\ANL-7411-Rev-Supp-1.pdf");
using (NewNNAFTAEntities ctx = new NewNNAFTAEntities()) {
    ctx.AddTodatReport(report);
    ctx.SaveChanges();
}
I have commented the line of code above where the error occurs. The exact error is System.OutOfMemoryException, which leaves me with little doubt that the file size is what is causing the problem. The above code does work on smaller PDFs. I don't know where the exact limit is as far as file size goes, but my biggest PDFs are over 350 megabytes and they get the error.
Any help would be greatly appreciated. Thanks!
You're not really using the streaming in FILESTREAM very much in your example...
Check out the MSDN docs on FILESTREAM in ADO.NET and this article, both of which show how to use SqlFileStream as a stream from C# - that should work a lot better (I would believe) than sucking the whole PDF into memory...
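A minimal sketch of the streaming approach (assuming the datReport table above with a FILESTREAM column ReportData, a row already inserted with a non-NULL value, and rowGuid holding its RowGuid; SqlFileStream needs the column's PathName() and a transaction context):
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        // FILESTREAM access requires the logical path name and a transaction context.
        var cmd = new SqlCommand(
            "SELECT ReportData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
            "FROM datReport WHERE RowGuid = @id", conn, tx);
        cmd.Parameters.AddWithValue("@id", rowGuid);

        string path;
        byte[] txContext;
        using (var reader = cmd.ExecuteReader())
        {
            reader.Read();
            path = reader.GetString(0);
            txContext = (byte[])reader[1];
        }

        // SqlFileStream (System.Data.SqlTypes) exposes the BLOB as an ordinary Stream.
        using (var sqlStream = new SqlFileStream(path, txContext, FileAccess.Write))
        using (var file = File.OpenRead(@"C:\TestSavePDF\ANL-7411-Rev-Supp-1.pdf"))
        {
            file.CopyTo(sqlStream); // copies in buffered chunks - the PDF is never fully in memory
        }

        tx.Commit();
    }
}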
Marc
We commonly use MS Visual FoxPro v9.0 SP1 - the language, tables, and reports. However, sometimes we use an ODBC driver to connect to the tables. The ODBC driver was written for FoxPro v6 and doesn't support certain nested selects, autoincrement fields, or embedded casts.
We would like to find an alternative to what we have. It could be another ODBC driver that works with Visual FoxPro v9, or a complete alternative to ODBC. Is there such a thing?
Thanks.
(Talk about reuse, just answered this in another thread today)
If you are looking for an ODBC driver for VFP databases and tables, you might consider looking at Advantage Database from iAnywhere. They have a local engine and a server engine. The local engine can access DBF data, and, for your case, it also has an ODBC driver that works with VFP data up to and including the current Visual FoxPro 9. The local engine and the included ODBC driver are free.
http://www.sybase.com/ianywhere
You can also go via COM+ and do almost anything in VFP; however, you will have security issues to configure through Admin Tools, Component Services.
You can build it as either a single-threaded or a multi-threaded DLL.
Once it is registered and the type library info is added as a reference to a C# (or other) app, you can make function calls with whatever parameters you need. There are many things you can return, but tables I typically send back as XML (via FoxPro's XMLAdapter class) and then convert to a table once in C#. It's been a while since I worked that way, but it gives tremendous flexibility, as you can do your queries, scan loops, and other complex conditional testing and updating of the cursor before generating the XML and returning it as a string.
DEFINE CLASS YourClass AS Custom OLEPUBLIC
    FUNCTION GetMyData( lcSomeString as String )
        SELECT * FROM (YourPath + "SomeTable") WHERE ... INTO CURSOR C_SomeCursor READWRITE
        * ... any other manipulation, testing, etc. ...
        oXML = CREATEOBJECT( "XMLAdapter" )
        lcXML = ""
        oXML.AddTableSchema( "C_SomeCursor" )
        oXML.ToXML( "lcXML", "", .F. )
        RETURN lcXML
    ENDFUNC
ENDDEFINE
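On the C# side, a minimal sketch of consuming that COM object and turning the XML back into a table (the ProgID "YourProject.YourClass" is hypothetical; late binding via dynamic avoids needing the type library reference):
using System;
using System.Data;
using System.IO;

// Create the registered VFP COM+ component by its ProgID (hypothetical name).
Type comType = Type.GetTypeFromProgID("YourProject.YourClass");
dynamic vfp = Activator.CreateInstance(comType);

// Call the VFP method; it returns the cursor serialized as an XML string.
string xml = vfp.GetMyData("some argument");

// XMLAdapter output includes the table schema (AddTableSchema above),
// so a DataSet can rebuild the table from the XML alone.
var ds = new DataSet();
ds.ReadXml(new StringReader(xml));
DataTable table = ds.Tables[0];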