I am able to see every object in the .rdb file in the variable environment as a "promise", as per the directions here. Now I want to edit one of them and save it back. How can I do that? I am new to R.
In a discussion on r-pkg-devel, Ivan Krylov provided the following function to read an RDB database:
# filename: the .rdb file
# offset, size: the pair of values from the .rdx
# type: 'gzip' if $compressed is TRUE, 'bzip2' for 2, 'xz' for 3
readRDB <- function(filename, offset, size, type = 'gzip') {
  f <- file(filename, 'rb')
  on.exit(close(f))
  seek(f, offset + 4)
  unserialize(memDecompress(readBin(f, 'raw', size - 4), type))
}
Therefore, you should be able to implement the reverse using a combination of serialize, memCompress, and writeBin.
Note that if the object changes size, you will also have to adjust the index file.
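For example, a write counterpart could look roughly like the sketch below. It is untested: writeRDB is a hypothetical name, it assumes the 4 bytes that readRDB skips hold the length of the uncompressed data, and it simply appends the new entry at the end of the .rdb file and returns the offset/size pair that you would then have to record in the corresponding .rdx index entry yourself.
# Untested sketch of the reverse operation: append one object to the .rdb
# file and return the new offset/size pair for the .rdx index.
writeRDB <- function(filename, object, type = 'gzip') {
  raw <- serialize(object, NULL)                       # serialized object
  payload <- memCompress(raw, type)                    # compressed, matching $compressed in the .rdx
  offset <- file.size(filename)                        # new entry starts at the current end of file
  f <- file(filename, 'ab')                            # append in binary mode
  on.exit(close(f))
  writeBin(length(raw), f, size = 4, endian = 'big')   # assumed 4-byte header: uncompressed length
  writeBin(payload, f)
  c(offset = offset, size = length(payload) + 4)       # values to store in the .rdx entry
}
Appending rather than overwriting in place side-steps the size problem for the other entries: only the offset/size pair of the object you replaced needs to change in the index.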
I'm trying to retrieve raw physiological data (hand dynamometer) from AcqKnowledge so that I can preprocess and analyze it in R (or MATLAB) in an automated way. Is there a way to do that? I would like to avoid having to copy/paste the data manually from AcqKnowledge into Excel just to read it into R.
Then I would like to apply a filter to the data and retrieve the squeezes of interest in R. Is there a way to do that?
Any advice is very welcome, thank you in advance!
I had a similar problem. The key is the load_acq.m function, which you can find here: https://github.com/munoztd0/fusy-octave-memories/blob/main/load_acq.m. You can then loop through your .acq files and save them as .csv (or anything else you can load and work with in R).
As for how to do that, I have put together a little routine:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Create physiological files from AcqKnowledge
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% SETTING THE PATH
path = 'your path here'; %your path here
physiodir = fullfile(path, '/SOURCE/physio');
outdir = fullfile(path, '/DERIVATIVES/physio');
session = 'first'; % your session
base = [path 'sub-**_ses-' session '*'];
subj = dir(base); %check all matching subject for this session
addpath([path '/functions']) % load the AcqKnowledge functions
%check here https://github.com/munoztd0/fusy-octave-memories/blob/main/load_acq.m
for i = 1:length(subj)

    cd(physiodir) % go to the folder
    subjO = char(subj(i).name);
    subjX = subjO(1:end-4);      % remove the .mat extension
    filename = [subjX '.acq'];   % add the .acq extension
    disp(['****** PARTICIPANT: ' subjX ' **** session ' session ' ****']);

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % OPEN FILE
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    physio = load_acq(filename); % load and transform the AcqKnowledge file
    data = physio.data;          % or whatever the field name is

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % CREATE AND SAVE THE CHANNEL
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    cd(outdir) % go to the folder

    % save the data as a text file in the participant directory
    fid = fopen([subjX '.txt'], 'wt');
    for ii = 1:length(data)
        fprintf(fid, '%g\t', data(ii));
        fprintf(fid, '\n');
    end
    fclose(fid);

end
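Once the text files are written, reading them back into R is straightforward. Here is a minimal, untested R sketch; the file name is hypothetical and will be whatever subjX ended up being for that participant:
# read one exported file (one sample per line) and eyeball it
squeeze <- scan("sub-01_ses-first.txt", what = numeric())  # hypothetical file name
plot(squeeze, type = "l")                                  # quick check before filtering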
Hope it helps!
I have a function that will download an image collection as a TFRecord or a GeoTIFF.
Here's the function:
def download_image_collection_to_drive(collection, aois, bands, limit, export_format):
    if collection.size().lt(ee.Number(limit)):
        bands = [band for band in bands if band not in ['SCL', 'QA60']]
        for aoi in aois:
            cluster = aoi.get('cluster').getInfo()
            geom = aoi.bounds().getInfo()['geometry']['coordinates']
            aoi_collection = collection.filterMetadata('cluster', 'equals', cluster)
            for ts in range(1, 11):
                print(ts)
                ts_collection = aoi_collection.filterMetadata('interval', 'equals', ts)
                if ts_collection.size().eq(ee.Number(1)):
                    image = ts_collection.first()
                    p_id = image.get("PRODUCT_ID").getInfo()
                    description = f'{cluster}_{ts}_{p_id}'
                    task_config = {
                        'fileFormat': export_format,
                        'image': image.select(bands),
                        'region': geom,
                        'description': description,
                        'scale': 10,
                        'folder': 'output'
                    }
                    if export_format == 'TFRecord':
                        task_config['formatOptions'] = {'patchDimensions': [256, 256], 'kernelSize': [3, 3]}
                    task = ee.batch.Export.image.toDrive(**task_config)
                    task.start()
                else:
                    logger.warning(f'no image for interval {ts}')
    else:
        logger.warning(f'collection over {limit} aborting drive download')
It seems to fail whenever it gets to the second aoi. I'm confused by this, because if ts_collection.size().eq(ee.Number(1)) confirms there is an image there, it should be able to get the product ID from it.
line 24, in download_image_collection_to_drive
p_id = image.get("PRODUCT_ID").getInfo()
File "/lib/python3.7/site-packages/ee/computedobject.py", line 95, in getInfo
return data.computeValue(self)
File "/lib/python3.7/site-packages/ee/data.py", line 717, in computeValue
prettyPrint=False))['result']
File "/lib/python3.7/site-packages/ee/data.py", line 340, in _execute_cloud_call
raise _translate_cloud_exception(e)
ee.ee_exception.EEException: Element.get: Parameter 'object' is required.
Am I falling foul of immutable server-side objects somewhere?
This is a server-side value problem, yes, but immutability has nothing to do with it: your if statement isn't working the way you intend.
ts_collection.size().eq(ee.Number(1)) is a server-side value; you have described a comparison that hasn't happened yet. That means a local operation like a Python if statement cannot take the outcome of the comparison into account, and will simply treat the object as a true value.
Using getInfo would be a quick fix:
if ts_collection.size().eq(ee.Number(1)).getInfo():
but it would be more efficient to avoid using getInfo more than needed by fetching the entire collection's info just once, which includes the image info.
...
ts_collection_info = ts_collection.getInfo()
if ts_collection_info['features']:  # Are there any images in the collection?
    image = ts_collection.first()  # server-side image, still needed for the export
    image_info = ts_collection_info['features'][0]  # client-side image info already downloaded
    p_id = image_info['properties']['PRODUCT_ID']  # get ID from client-side info
...
This way, you only make two requests per ts: one to check for the match, and one to start the export.
Note that I haven't actually run this Python code, and there might be some small mistakes; if it gives you any trouble, print(ts_collection_info) and examine the structure you actually received to figure out how to interpret it.
Using the built-in "Import Data..." functionality we can import a properly formatted text file (such as CSV or tab-delimited) as an image. It is rather straightforward to write a script to do so. However, my scripting approach is not efficient: it requires me to loop through each row (using the "StreamReadTextLine" function), so it takes a while to import a 512x512 image.
Is there a better way, or an "undocumented" script function that I can tap into?
DigitalMicrograph offers import functionality via the File / Import Data... menu entry, which opens an import dialog.
The functionality invoked by this dialog can also be accessed from script, using the command
BasicImage ImageImportTextData( String img_name, ScriptObject stream, Number data_type_enum, ScriptObject img_size, Boolean lines_are_rows, Boolean size_by_counting )
As with the dialog, one has to pre-specify a few things.
The data type of the image.
This is a number. You can find out which number belongs to which image data type by, for example, creating an image and outputting its data type:
image img := Realimage( "", 4, 100 )
Result("\n" + img.ImageGetDataType() )
The file stream object
This object describes where the data is stored. The F1 help documentation explains how to create a file stream from an existing file, but essentially you need to specify a path to the file, open the file for reading (which gives you a file handle), and then use the file handle to create the stream object.
string path = "C:\\test.txt"
number fRef = OpenFileForReading( path )
object fStream = NewStreamFromFileReference( fRef, 1 )
The image size object
This is a specific script object you need to allocate. It wraps the image size information. If you auto-detect the size from the text, you don't need to specify the actual size, but you still need the object.
object imgSizeObj = Alloc("ImageData_ImageDataSize")
imgSizeObj.SetNumDimensions(2) // Not needed for counting!
imgSizeObj.SetDimensionSize(0,10) // Not used for counting
imgSizeObj.SetDimensionSize(1,10) // Not used for counting
Boolean checks
Like with the checkboxes in the UI, you specify two conditions:
Lines are Rows
Get Size By Counting
Note that the "counting" flag is only used if "Lines are Rows" is also true, the same as with the dialog.
The following script imports a text file with counting:
image ImportTextByCounting( string path, number DataType )
{
    number fRef = OpenFileForReading( path )
    object fStream = NewStreamFromFileReference( fRef, 1 )

    number bLinesAreRows = 1
    number bSizeByCount = 1
    bSizeByCount *= bLinesAreRows  // Only valid together!

    object imgSizeObj = Alloc("ImageData_ImageDataSize")

    image img := ImageImportTextData( "Image Name", fStream, DataType, imgSizeObj, bLinesAreRows, bSizeByCount )
    return img
}
string path = "C:\\test.txt"
number kREAL4_DATA = 2
image img := ImportTextByCounting( path, kREAL4_DATA )
img.ShowImage()
I'm trying to write a quick script that opens a family document, changes the parameter group of two specified parameters, and then closes and saves the document. I've done multiple tests and I am able to change the parameter groups of the specified parameters, but the group changes don't get saved back to the family file. When I open the newly saved family, the parameter groups have reverted to their original group.
This is with Revit 2017.2.
The same script, when run in RPS in Revit 2018, works as desired.
import clr
import os
clr.AddReference('RevitAPI')
clr.AddReference('RevitAPIUI')
from Autodesk.Revit.DB import *
from Autodesk.Revit.UI import UIApplication
from System.IO import Directory, SearchOption
searchstring = "*.rfa"
dir = r"C:\Users\dboghean\Desktop\vanity\2017"
docs = []
if Directory.Exists(dir):
    files = Directory.GetFiles(dir, searchstring, SearchOption.AllDirectories)
    for f in files:
        name, extension = os.path.splitext(f)
        name2, extension2 = os.path.splitext(name)
        if extension2:
            os.remove(f)
        else:
            docs.append(f)
else:
    print("Directory does not exist")

doc = __revit__.ActiveUIDocument.Document
app = __revit__.Application
uiapp = UIApplication(app)

currentPath = doc.PathName
pgGroup = BuiltInParameterGroup.PG_GRAPHICS

for i in docs:
    doc = app.OpenDocumentFile(i)
    paramList = [i for i in doc.FamilyManager.Parameters]
    t = Transaction(doc, "test")
    t.Start()
    for i in paramList:
        if i.Definition.Name in ["Right Sidesplash Edge line", "Left Sidesplash Edge line"]:
            i.Definition.ParameterGroup = pgGroup
    t.Commit()
    doc.Close(True)
Any ideas?
Thanks!
I can confirm that this happens in Revit 2017. Strange!
A simple way around it is to arbitrarily rename the parameter using doc.FamilyManager.RenameParameter, then rename it back to the original name.
So in your case this would be three additional lines of code after changing the Parameter group:
originalName = i.Definition.Name
doc.FamilyManager.RenameParameter(i, "temp")
doc.FamilyManager.RenameParameter(i, originalName)
It doesn't get to the root of the problem, but it works around it.
I often have to define many similar devices in sip.conf like this:
[device](!)
; setting some parameters
[device01](device)
callerid=dev01 <01>
[device02](device)
callerid=dev02 <02>
; ...
[deviceXX](device)
callerid=devXX <XX>
The question is: could I perhaps avoid setting device-name-specific parameters by using some variable, like the following?
[device](!)
callerid=dev${DEVICE_NAME:-2} <${DEVICE_NAME:-2}>
; setting some parameters
[device01](device)
[device02](device)
; ...
[deviceXX](device)
P.S.
It would be perfect if there were some device constructor, so I could reduce the configuration to the following, but I think that is not possible in Asterisk.
[device](!)
callerid=dev${DEVICE_NAME:-2} <${DEVICE_NAME:-2}>
; setting some parameters
;[device${MAGIC_LOOP(1,XX,leading_zeroes)}](device)
I've had good results writing a small program that takes care of it. It checks for a line saying something like
------- Automatically generated -------
Everything after that line is regenerated as soon as the program detects that there are new values for it (these could come from a database or from a text file). I then run it with supervisor, and it checks every XX seconds whether there are changes.
If there are changes, it updates the sip.conf file and then issues a sip reload command.
I wrote it in Python, but whatever language you feel comfortable with should work just fine.
That's how I've managed it, and it has been working fine so far (after a couple of months). I'd be extremely interested in learning about other approaches, though. It's basically this (called from another script with supervisor):
# imports needed if this is run on its own; get_users_logic() and settings
# come from the calling script
import hashlib
import os
import subprocess
from functools import reduce

users = get_users_logic()

# get the data that will be used in the sip.conf file
data_to_be_hashed = reduce(lambda x, y: x + y, map(lambda x: x['username'] + x['password'] + x['company_prefix'], users))

m = hashlib.md5()
m.update(str(data_to_be_hashed).encode("ascii"))
new_md5 = m.hexdigest()

last_md5 = None
try:
    file = open(os.path.dirname(os.path.realpath(__file__)) + '/lastMd5.txt', 'r')
    last_md5 = file.read().rstrip()
    file.close()
except:
    pass

# if it changed...
if new_md5 != last_md5:
    # needs update
    with open(settings['asterisk']['path_to_sip_conf'], 'r') as file:
        sip_content = file.read().rstrip()

    parts = sip_content.split(";-------------- BEYOND THIS POINT IT IS AUTO GENERATED --------------;")
    sip_content = parts[0].rstrip()
    sip_content += "\n\n;-------------- BEYOND THIS POINT IT IS AUTO GENERATED --------------;\n\n"

    for user in users:
        m = hashlib.md5()
        m.update(("%s:sip.ellauri.it:%s" % (user['username'], user['password'])).encode("ascii"))
        md5secret = m.hexdigest()
        sip_content += "[%s]\ntype = friend\ncontext = %sLocal\nmd5secret = %s\nhost = dynamic\n\n" % (
            user['username'], user['company_prefix'], md5secret)

    # write the sip.conf file
    f = open(settings['asterisk']['path_to_sip_conf'], 'w')
    print(sip_content, file=f)
    f.close()

    subprocess.call('asterisk -x "sip reload"', shell=True)

    # write the new md5
    f = open(os.path.dirname(os.path.realpath(__file__)) + '/lastMd5.txt', 'w')
    print(new_md5, file=f)
    f.close()