I'm getting the following error:
net.rim.device.api.database.DatabaseOutOfMemoryException
When trying to update my DB. It seems to happen at random points in the code (of course these are always points where I interact with the DB).
What does this error mean, and how can I prevent it from happening?
By the way, I use transactions when using INSERT and UPDATE to speed things up.
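Roughly, the pattern is the SQL equivalent of the following (simplified; the table and column names are placeholders, and in the app the statements go through the BlackBerry Database API):
BEGIN TRANSACTION;
INSERT INTO items (name, value) VALUES ('a', 1);   -- placeholder table/columns
UPDATE items SET value = 2 WHERE name = 'a';
COMMIT;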
Any help at all would be greatly appreciated!
Thanks!!
I am working on a project using typeorm and nestjs.
When a table is first created, I want to execute an SQL statement to put the initial data into it.
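For example, a seed statement along these lines (the table and values are only placeholders):
INSERT INTO role (id, name) VALUES (1, 'admin'), (2, 'user');   -- hypothetical table and columns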
I searched for a way to put the initial data, but couldn't find any results.
I need your help.
Thank you :)
Hope you have a great day, and please help me with this problem.
I am trying to recreate AP aging through ODBC. Everything is working fine except the Journal transactions.
In NetSuite saved searches there is a field remainingamount which is not available in the connection schema for some reason. We have tried to contact NetSuite directly but still have not received any feedback from them.
There is a field foreignamountunpaid/foreignamountunused in the transactionline table that I am trying to use right now, and with bills and expense reports it works totally fine.
However, we have problems with some of the JEs: in some of them the value is null even though the saved search shows a value for that line.
I tried to analyse why this is happening, but it looks totally random.
So, do you by any chance know whether there is a better field for the remaining amount that I could use through the ODBC connection? Or why some JEs have null values in the foreignamountunpaid/foreignamountunused fields?
Thank you in advance.
Found a way to make it work:
nvl(nvl(transactionline.foreignamountunpaid, -transactionline.foreignpaymentamountunused),-transactionline.foreignamount)
This expression in SQL gives you the desired result.
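For example, used against the transactionline table it would look something like this (the alias is just illustrative; adjust the rest of the query to your own schema):
SELECT
  nvl(nvl(transactionline.foreignamountunpaid, -transactionline.foreignpaymentamountunused), -transactionline.foreignamount) AS remaining_amount   -- alias is illustrative
FROM transactionline;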
We are using Oracle Warehouse Builder in our project. Accidentally, some of the internal tables' data got deleted. The impact is that when I open the map in OWB, the canvas is completely blank: I cannot see the tables and the transformations applied. However, when I right-click on the map and execute it, it runs perfectly fine. But the code is not visible, and neither can I deploy that map. The table whose data deletion caused this was CMPSCOMAPCLASSES. We do not have a regular backup of the database, hence we cannot recover the data.
Can anybody please help me get the data back somehow?
Appreciate your help.
I need a lucid explanation of why XDMP-EXTIME happens in MarkLogic. In my case it's happening during a search (read operation). In the exception message a line from the code is printed:
XDMP-EXTIME: wsssearch:options($request, $req-config) -- Time limit exceeded
This gives me the impression that the execution does not go beyond that line. But it seems to be a pretty harmless line of code: it does not fetch any data from the DB, it just sets certain search options. How can I pinpoint which part of the code is causing this? I have heard that increasing the max time limit of the task server solves such problems, but that's not an option for me. Please let me know how such problems are tackled. It would be very hard for me to show you the code base. Still hoping to hear something helpful from you guys.
The error message can sometimes put you on the wrong track because of lazy evaluation. The execution can actually be further down the road than the error message seems to indicate. Could be one line, could be several. Look for where the returned value is being used.
Profiling can sometimes help you get a clearer picture of where most of the time is spent, but lazy evaluation can throw things off here as well.
The bottom-line meaning of the message is pretty simple: the execution of your code takes too long. The actual search in which the options are being used is the most likely candidate of where it goes wrong.
If you are using cts:search or search:search under the covers, then that should normally perform well. A search typically gets slow when you end up returning many results, e.g. when you don't apply pagination. search:search does apply pagination by default, however.
A search can also get slow if you are running your search in update mode. You could potentially end up having MarkLogic try to apply many (unnecessary) read locks. Put the following declaration in your search endpoint code, or in the XQuery main module that does the search:
declare option xdmp:update "false";
HTH!
You could try profiling the code to see what specifically is taking so long. This might require temporarily increasing the session time limit to prevent the timeout from occurring while profiling. Note that unless this is being executed on the Task Server via xdmp:spawn or xdmp:spawn-function, you would need to increase the value on the App Server hosting the script.
If your code is in a module, the easiest thing to do is make a call to the function that times out from Query Console using the Profile tab. Alternatively, you could begin the function with prof:enable(xdmp:request()) and later output the contents of prof:report(xdmp:request()) to a file on the filesystem, or insert it somewhere in the database.
I just started to use KNIME, and it is supposed to manage a huge amount of data, but it isn't: it's slow and often unresponsive. I will be managing even more data than I'm using now. What am I doing wrong?
I set this in my configuration file knime.ini:
-XX:MaxPermSize=1024m
-Xmx2048m
I also read data from a database node (millions of rows), but I can't limit it with SQL (I don't really mind; I need this data):
SELECT * FROM foo LIMIT 1000
Error:
WARN Database Reader com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'LIMIT 0' at line 1
I had the same issue... and was able to solve it really simply. KNIME has a knime.ini file, which holds the parameters KNIME uses when it runs...
The real issue is that the JDBC driver is set to a fetch size of 10. By default, when Oracle JDBC runs a query, it retrieves a result set of 10 rows at a time from the database cursor. This is the default Oracle row fetch size value... so whenever you are reading from the database you will have a big pain waiting for all the rows to be retrieved.
The fix is simple: go to the folder where KNIME is installed, look for the file knime.ini, open it, and add the following lines at the bottom. This overrides the default JDBC fetching, and then you will get the data in literally seconds.
-Dknime.database.fetchsize=50000
-Dknime.url.timeout=9000
Hope this helps :)
See http://tech.knime.org/forum/knime-users/knime-performance-reading-from-a-database for the rest of this discussion and solutions...
I'm not sure if your question is about the performance problem or the SQL problem.
For the former, I had the same issue and only found a solution when I started searching for Eclipse performance fixes rather than KNIME performance fixes. It's true that increasing the Java heap size is a good thing to do, but my performance problem (and perhaps yours) was caused by something bad going on in the saved workspace metadata. Solution: Delete contents of the knime/workspace/.metadata directory.
As for the latter, I'm not sure why you're getting that error; maybe try adding a semicolon at the end of the SQL statement.
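If you try that, the statement from the question would simply become:
SELECT * FROM foo LIMIT 1000;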