Memory allocation of 4GB ignored in Firebase Cloud Function - firebase

I would like to raise the memory allocation limit of a Firebase Cloud Function to 4GB. I set the memory value with the runWith() method:
functions.runWith({memory: "4GB"})
This, however, is being ignored and the deployed function does not have 4 GB of memory allocated.
If I try with 3GB, I get this error:
Error: The only valid memory allocation values are: 128MB, 256MB, 512MB, 1GB, 2GB, 4GB
So it seems 4 GB is a valid value.
What am I doing wrong? Am I missing something?
It works just fine if I use 2GB, 1GB, and so on; only the 4GB value is ignored.

I had the exact same issue and reached out to Firebase Support to report it. They acknowledged it as a bug and said a fix would ship in their next release.
If the issue is not yet fixed at the time of reading, this is the workaround:
Go to the GCP Console and select your project
Select Cloud Functions from the menu on the left
Click "Edit"
Expand the section "VARIABLES, NETWORKING AND ADVANCED SETTINGS"
Change the "Memory allocated" field
Click "Next" and then click "Deploy"
EDIT: The fix has been released in firebase-functions 3.13.1

As of July 2022, the documentation says:
memory: amount of memory to allocate to the function, possible values are: '128MB', '256MB', '512MB', '1GB', '2GB', '4GB', and '8GB'.
functions.runWith({
  memory: '4GB',
}).firestore
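For reference, a complete first-generation trigger using this option looks roughly like the following (a minimal sketch; the export name, document path, and handler body are illustrative placeholders, not from the original answer):

import * as functions from 'firebase-functions';

// Hypothetical Firestore trigger; only the runWith({ memory: '4GB' })
// part comes from the answer above, the rest is placeholder scaffolding.
export const heavyWorker = functions
  .runWith({ memory: '4GB' })
  .firestore.document('jobs/{jobId}')
  .onCreate(async (snapshot, context) => {
    // memory-intensive work goes here
  });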

Related

Azure Cosmos DB Emulator slow (100 ms / request)

I am trying to set up the Azure Cosmos DB Emulator to work locally with integration tests but I found that it is very slow.
I am reading a ~1KB JSON document with the container.ReadItemAsync<T> method and awaiting the result. I am calling this method in a loop, 100 times.
The execution time is consistently around 9.5-10 seconds, so one request takes around 100 milliseconds, which is very slow given that the service runs locally.
Why is this so slow, and how can I make it faster?
I would expect at most 1 ms per request, considering it is all local disk I/O.
I tried the following but they didn't work:
Turning rate limiting on/off
Creating the database/collection with various provisioning settings; it has zero effect on performance (even 100k RU)
Creating the database and collection manually vs. with the client SDK
Using "Reset Data" in the emulator tray menu
Further information:
The emulator version is 2.14.6.0 (68d4ca59)
I start the emulator from the Start menu, but starting it from the command line doesn't change anything
I am using the Microsoft.Azure.Cosmos NuGet package, version 3.22.1
My CPU is an i7-8565U, but it isn't even fully used while the test is running
My system has 16 GB RAM
My system runs on a reasonably fast SSD ("NVMe SK hynix BC501 H"), but while the test runs the SSD usage stays between 0 and 2%
The performance is the same if I increase the document size to 100 KB or even 1 MB
Creating your CosmosClientOptions with the AllowBulkExecution = true setting can cause this.
The SDK will construct batches and group operations; when a batch is full, it gets dispatched, but if the batch doesn't fill up, there is a timer that dispatches it to make sure the operations complete. That timer is currently 100 milliseconds. So if a batch does not fill up (for example, you are just sending 50 concurrent operations), the overall latency might be affected.
Source: Introducing Bulk support in the .NET SDK
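The effect is easy to see in a minimal sketch of the batch-plus-flush-timer pattern the quote describes (TypeScript, purely illustrative; this is not the SDK's actual code):

type Operation = { id: string };

// Buffers operations into a batch; a full batch is dispatched immediately,
// while a partial one waits for the flush timer, so sparse traffic pays
// up to flushIntervalMs (100 ms in the .NET SDK) of extra latency per call.
class Batcher {
  private batch: Operation[] = [];
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private readonly maxBatchSize: number,
    private readonly flushIntervalMs: number,
    private readonly dispatch: (batch: Operation[]) => void,
  ) {}

  add(op: Operation): void {
    this.batch.push(op);
    if (this.batch.length >= this.maxBatchSize) {
      this.flush(); // full batch: dispatched immediately, low latency
    } else if (!this.timer) {
      // partial batch: dispatched only when the timer fires
      this.timer = setTimeout(() => this.flush(), this.flushIntervalMs);
    }
  }

  private flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = undefined; }
    if (this.batch.length > 0) this.dispatch(this.batch);
    this.batch = [];
  }
}

So with AllowBulkExecution = true and a serial read loop, every ReadItemAsync ends up in a batch of one, waiting out the timer; turning the option off (or sending enough concurrent work to fill the batches) removes the ~100 ms floor.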

Cosmos DB Emulator hangs when pumping continuation token, segmented query

I have just added a new feature to an app I'm building. It uses the same working Cosmos/Table storage code that other features use to query and pump result segments from the Cosmos DB Emulator via the Tables API.
The emulator is running with:
/EnableTableEndpoint /PartitionCount=50
This is because I read that the emulator defaults to supporting 5 unlimited containers and/or 25 limited ones, and since this is a Tables API app, the table containers are created as unlimited.
The table being queried is the 6th to be created and contains just 1 document.
It either takes around 30 seconds to run a simple query, "tripping" my Too Many Requests error handling/retry in the process, or it hangs seemingly forever with no results returned, and the emulator has to be shut down.
My understanding is that with 50 partitions I can make 10 unlimited tables/collections, since each is "worth" 5. See the documentation.
I have tried with rate limiting on and off, and jacked the RU/s to 10,000 on the table. It always fails to query this one table. The data, including the files on disk, has been cleared many times.
It seems like a bug in the emulator. Note that the "Sorry..." error that I would expect to see upon creation of the 6th unlimited table, as per the docs, is never encountered.
After switching to a real Cosmos DB instance on Azure, this is looking like a problem with my dodgy code.
Confirmed: my dodgy code.
Stand down everyone. As you were.

UCanAccess JDBC Driver - OutOfMemoryError with memory=false setting

I am using jackcess-2.1.1. I have set the memory=false parameter but still hit an OutOfMemoryError. It happens while processing an MDB file of 1.8 GB.
The JVM maximum heap size is set to 1 GB. If I raise the maximum to 2 GB, it works with no issues.
But according to the instructions on the UCanAccess portal, when memory=false is set, in-memory mode is not supposed to be used, so the JVM memory arguments should not change anything.
Any response is greatly appreciated. Find the error below.
java.lang.OutOfMemoryError: Java heap space
at com.healthmarketscience.jackcess.impl.LongValueColumnImpl.readLongValue(LongValueColumnImpl.java:136)
at com.healthmarketscience.jackcess.impl.LongValueColumnImpl.read(LongValueColumnImpl.java:90)
at com.healthmarketscience.jackcess.impl.ColumnImpl.read(ColumnImpl.java:586)
at com.healthmarketscience.jackcess.impl.TableImpl.getRowColumn(TableImpl.java:767)
at com.healthmarketscience.jackcess.impl.TableImpl.getRow(TableImpl.java:673)
at com.healthmarketscience.jackcess.impl.TableImpl.getRow(TableImpl.java:652)
at com.healthmarketscience.jackcess.impl.CursorImpl.getCurrentRow(CursorImpl.java:699)
at com.healthmarketscience.jackcess.impl.CursorImpl$BaseIterator.next(CursorImpl.java:822)
at com.healthmarketscience.jackcess.impl.CursorImpl$BaseIterator.next(CursorImpl.java:1)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTableData(LoadJet.java:829)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTablesData(LoadJet.java:997)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTables(LoadJet.java:1041)
at net.ucanaccess.converters.LoadJet$TablesLoader.access$2900(LoadJet.java:273)
at net.ucanaccess.converters.LoadJet.loadDB(LoadJet.java:1479)
at net.ucanaccess.jdbc.UcanaccessDriver.connect(UcanaccessDriver.java:243)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:153)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:144)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnectionFromDriver(AbstractDriverBasedDataSource.java:155)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnection(AbstractDriverBasedDataSource.java:120)
at org.hibernate.service.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:141)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcServicesImpl.java:242)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:117)
at org.hibernate.service.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:76)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:160)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:132)
at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1825)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1783)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1868)
at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:372)
If I raise the maximum heap size to 2 GB, it works with no issues. But according to the instructions on the UCanAccess portal, when memory=false is set, in-memory mode is not supposed to be used, so the JVM memory arguments should not change anything.
That's not quite true. memory=false tells UCanAccess not to hold the HSQLDB backing database tables in memory, but a disk-based HSQLDB database will still consume some memory and there are lots of other things that UCanAccess (and Jackcess) must keep in memory too. The memory requirements with memory=false will just be considerably lower than with memory=true.

Just started getting AIR SQLite Error 3128 Disk I/O error occurred

We have a new beta version of our software with some changes, but not around our database layer.
We've just started getting error 3128 reported in our server logs. It seems that once it happens, it keeps happening for as long as the app is open. It is most apparent in the part of the code that logs data every second via SQLite. We've generated 47k errors on our server this month alone.
3128 Disk I/O error occurred. Indicates that an operation could not be completed because of a disk I/O error. This can happen if the runtime is attempting to delete a temporary file and another program (such as a virus protection application) is holding a lock on the file. This can also happen if the runtime is attempting to write data to a file and the data can't be written.
I don't know what could be causing this error. Maybe an anti-virus program? Maybe our app is getting confused and writing data on top of itself? We're using async connections.
It's causing lots of issues and we're at a loss. It happened in our older version too, but maybe 100 times in a month rather than 47,000. Either way, I'd like to make it happen zero times.
Possible solution, from "Exception Message: Some kind of disk I/O error occurred":
Summary: there is probably not a problem with the database itself, but a problem creating (or deleting) the temporary file once the database is opened. AIR may have permissions on the database file, but not to create or delete files in its directory.
One answer that has worked for me is to use a PRAGMA statement to set the journal_mode value to something other than DELETE. You do this by issuing a PRAGMA statement the same way you would issue a query statement.
PRAGMA journal_mode = OFF
Unfortunately, if the application crashes in the middle of a transaction while the OFF journaling mode is set, the database file will very likely become corrupted. [1]
[1] http://www.sqlite.org/pragma.html#pragma_journal_mode
The solution was to make sure database deletes, updates, and inserts only happened one at a time, by writing a small wrapper that serializes them. On top of that, we had to watch for error 3128 and retry; a sketch of the pattern follows below. I think this happens because we have a trigger that can lock the database right after we insert data.
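A minimal sketch of that wrapper (TypeScript pseudocode of the pattern; the original app is ActionScript/AIR, and runStatement and errorID here are illustrative placeholders, not a real AIR API):

const DISK_IO_ERROR = 3128;

// Placeholder for whatever actually executes a statement against SQLite.
declare function runStatement(sql: string): Promise<void>;

let queue: Promise<void> = Promise.resolve();

// Chain every write onto a single promise so only one statement runs
// at a time, retrying a few times when error 3128 comes back.
function enqueueWrite(sql: string, retries = 3): Promise<void> {
  const task = async (): Promise<void> => {
    for (let attempt = 0; ; attempt++) {
      try {
        return await runStatement(sql);
      } catch (err: any) {
        if (err?.errorID !== DISK_IO_ERROR || attempt >= retries) throw err;
        // brief backoff before retrying the failed write
        await new Promise((r) => setTimeout(r, 50 * (attempt + 1)));
      }
    }
  };
  queue = queue.then(task, task);
  return queue;
}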

Performance counters while load testing

Below are some of the counter values captured while load testing with 250 users:
Gen 2 heap size: 1124196
# Bytes in all Heaps: 2172104
# GC Handles: 926
# of Pinned Objects: 11
Large Object Heap size: 87128
# Total committed Bytes: 3350528
# Total reserved Bytes: 33546240
The values kept increasing throughout the test until they reached the figures above.
After the test finished, the memory shown in Task Manager for w3wp.exe was not released until an IIS reset was applied.
The application is also not accessible until an IIS reset is applied (we get "COM+ activation failed").
Has anyone been in this situation before?
Thanks
Yes, you need a tool like ANTS Memory Profiler. Aside from that, make sure you are closing MemoryStreams, SqlConnections, and anything else I/O-related. Try to use using statements on anything that implements IDisposable, as sketched below. Check for static references to objects tied to your Page instances.
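For what it's worth, the shape of that pattern looks like this (sketched with TypeScript 5.2+ explicit resource management, since the thread's .NET code isn't shown; Resource and work are illustrative names, and the original advice targets .NET's IDisposable/using):

// Analogue of .NET's IDisposable/using: the resource is released
// deterministically when the scope exits, even on exceptions.
class Resource implements Disposable {
  [Symbol.dispose]() {
    // release streams, connections, handles here
  }
}

function work(): void {
  using res = new Resource(); // disposed automatically at end of scope
  // ... use res ...
}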
