Restore vgg16 network in tensorflow - r

This one has been giving me a headache for quite some time now, even though it seems very basic.
I have the vgg16 network downloaded as a .ckpt
(from https://github.com/tensorflow/models/blob/master/slim/README.md#Pretrained)
Now what I want to do is load, for example, the tensor of the first convolution layer of this network as an array in R.
I tried
restorer = tf$train$Saver()
sess = tf$Session()
restorer$restore(sess, "/home/beheerder/R/vgg_16.ckpt")
But then I do not see any variables appearing in my environment.
I'm working in R, but an answer in Python is OK as well, as I can probably translate it to R.

Saver takes the variables to restore in its constructor. In other words, you have to create the variables before you can restore them. Here is the example from Saver's documentation:
v1 = tf.Variable(..., name='v1')
v2 = tf.Variable(..., name='v2')
# Pass the variables as a dict:
saver = tf.train.Saver({'v1': v1, 'v2': v2})
# Or pass them as a list.
saver = tf.train.Saver([v1, v2])
If you were to run the first line of your code in Python, you would get:
In [1]: import tensorflow as tf
In [2]: saver = tf.train.Saver()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-18da33d742f9> in <module>()
----> 1 saver = tf.train.Saver()
/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.pyc in __init__(self, var_list, reshape, sharded, max_to_keep, keep_checkpoint_every_n_hours, name, restore_sequentially, saver_def, builder, defer_build, allow_empty, write_version, pad_step_number)
1054 self._pad_step_number = pad_step_number
1055 if not defer_build:
-> 1056 self.build()
1057 if self.saver_def:
1058 self._check_saver_def()
/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.pyc in build(self)
1075 return
1076 else:
-> 1077 raise ValueError("No variables to save")
1078 self._is_empty = False
1079 self.saver_def = self._builder.build(
ValueError: No variables to save
You can see how model variables are created before being restored in the 20 lines starting from https://github.com/tensorflow/models/blob/master/slim/train_image_classifier.py#L338
This code gets executed if you make a call to train_image_classifier.py similar to the flower example in https://github.com/tensorflow/models/blob/master/slim/README.md#fine-tuning-a-model-from-an-existing-checkpoint
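To make this concrete, here is a minimal Python sketch of the same pattern applied to the checkpoint from the question: build the VGG-16 graph first (via the TF-Slim nets module, assuming a TF 1.x installation), so Saver has variables to restore, then fetch the first convolution kernel as an array. The tensor name vgg_16/conv1/conv1_1/weights is taken from the slim vgg_16 definition; if it differs in your version, list tf.global_variables() to find it.
import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg
slim = tf.contrib.slim

# Build the graph first so that variables exist for Saver to restore.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(vgg.vgg_arg_scope()):
    logits, end_points = vgg.vgg_16(images, num_classes=1000, is_training=False)

saver = tf.train.Saver()  # no "No variables to save" error now
with tf.Session() as sess:
    saver.restore(sess, "/home/beheerder/R/vgg_16.ckpt")
    # Fetch the kernel of the first convolution layer as a numpy array.
    kernel = sess.graph.get_tensor_by_name("vgg_16/conv1/conv1_1/weights:0")
    first_conv = sess.run(kernel)
    print(first_conv.shape)  # expected: (3, 3, 3, 64)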

Related

Writing netcdf after running xarray.dataset.reindex to fill gaps in a time series fails due to memory allocation error

Problem Summary
I am attempting to convert a .grib2 file, representing a single day's worth of gridded radar rainfall data spanning the continental US, into a netCDF. When a .grib2 is missing timesteps, I attempt to fill them in with NA values using xarray.Dataset.reindex before running xarray.Dataset.to_netcdf. However, after I've reindexed the dataset, the script fails due to a memory allocation error; it succeeds if I don't reindex. One clue could be that the dataset chunks are set to (70, 3500, 7000), but when ds.to_netcdf is called, the script fails because it attempts to load a chunk with dimensions (210, 3500, 7000) (note that 210 = 3 × 70, suggesting chunks are being combined during the reindex).
Accessing Full Reproducible Example
The code and data to reproduce my results can be downloaded from this Dropbox link. The code is also shown below, followed by the outputs. Potentially relevant OS and environment information is shown below as well.
Code
#%% Import libraries
import time
start_time = time.time()
import xarray as xr
import cfgrib
from glob import glob
import pandas as pd
import dask
dask.config.set(**{'array.slicing.split_large_chunks': False}) # to silence warnings of loading large slice into memory
dask.config.set(scheduler='synchronous') # this forces single threaded computations (netcdfs can only be written serially)
#%% parameters
chnk_sz = "7000MB"
fl_out_nc = "out_netcdfs/20010101.nc"
fldr_in_grib = "in_gribs/20010101.grib2"
#%% loading and exporting dataset
ds = xr.open_dataset(fldr_in_grib, engine="cfgrib", chunks={"time": chnk_sz},
                     backend_kwargs={'indexpath': ''})
# reindex
start_date = pd.to_datetime('2001-01-01')
tstep = pd.Timedelta('0 days 00:05:00')
new_index = pd.date_range(start=start_date, end=start_date + pd.Timedelta(1, "day"),
                          freq=tstep, inclusive='left')
ds = ds.reindex(indexers={"time":new_index})
ds = ds.unify_chunks()
ds = ds.chunk(chunks={'time':chnk_sz})
print("######## INSPECTING DATASET PRIOR TO WRITING TO NETCDF ########")
print(ds)
print(' ')
print("######## ERROR MESSAGE ########")
ds.to_netcdf(fl_out_nc, encoding={"unknown": {"zlib": True}})
Outputs
######## INSPECTING DATASET PRIOR TO WRITING TO NETCDF ########
<xarray.Dataset>
Dimensions: (time: 288, latitude: 3500, longitude: 7000)
Coordinates:
* time (time) datetime64[ns] 2001-01-01 ... 2001-01-01T23:55:00
* latitude (latitude) float64 54.99 54.98 54.98 54.97 ... 20.03 20.02 20.01
* longitude (longitude) float64 230.0 230.0 230.0 ... 300.0 300.0 300.0
step timedelta64[ns] ...
surface float64 ...
valid_time (time) datetime64[ns] dask.array<chunksize=(288,), meta=np.ndarray>
Data variables:
unknown (time, latitude, longitude) float32 dask.array<chunksize=(70, 3500, 7000), meta=np.ndarray>
Attributes:
GRIB_edition: 2
GRIB_centre: 161
GRIB_centreDescription: 161
GRIB_subCentre: 0
Conventions: CF-1.7
institution: 161
history: 2022-09-10T14:50 GRIB to CDM+CF via cfgrib-0.9.1...
######## ERROR MESSAGE ########
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
d:\Dropbox\_Sharing\reprex\2022-9-9_writing_ncdf_fails\reprex\exporting_netcdfs_reduced.py in <cell line: 22>()
160 print(' ')
161 print("######## ERROR MESSAGE ########")
---> 162 ds.to_netcdf(fl_out_nc, encoding= {"unknown":{"zlib":True}})
File c:\Users\Daniel\anaconda3\envs\weather_gen_3\lib\site-packages\xarray\core\dataset.py:1882, in Dataset.to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute, invalid_netcdf)
1879 encoding = {}
1880 from ..backends.api import to_netcdf
-> 1882 return to_netcdf( # type: ignore # mypy cannot resolve the overloads:(
1883 self,
1884 path,
1885 mode=mode,
1886 format=format,
1887 group=group,
1888 engine=engine,
1889 encoding=encoding,
1890 unlimited_dims=unlimited_dims,
1891 compute=compute,
1892 multifile=False,
1893 invalid_netcdf=invalid_netcdf,
1894 )
File c:\Users\xxxxx\anaconda3\envs\weather_gen_3\lib\site-packages\xarray\backends\api.py:1219, in to_netcdf(dataset, path_or_file, mode, format, group, engine, encoding, unlimited_dims, compute, multifile, invalid_netcdf)
...
121 return arg
File <__array_function__ internals>:180, in where(*args, **kwargs)
MemoryError: Unable to allocate 19.2 GiB for an array with shape (210, 3500, 7000) and data type float32
Environment
Windows 11 Home
xarray 2022.3.0
cfgrib 0.9.10.1
dask 2022.7.0
A functional workaround is to chunk by a dimension that is unchanged during reindexing. The following modification causes the script to run successfully:
ds = xr.open_dataset(
fldr_in_grib,
engine="cfgrib",
chunks={ "latitude": 875 },
backend_kwargs={ 'indexpath': '' }
)
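As a sanity check (a hypothetical snippet reusing fldr_in_grib and new_index from the script above), you can inspect the chunk layout after reindexing. With latitude-based chunks, each chunk spans all 288 timesteps but only 875 latitude rows, roughly 288 × 875 × 7000 × 4 bytes ≈ 6.6 GiB, in line with the 7000MB target, and no chunk grows when the missing timesteps are inserted.
# hypothetical check, reusing names from the script above
ds = xr.open_dataset(fldr_in_grib, engine="cfgrib",
                     chunks={"latitude": 875},
                     backend_kwargs={"indexpath": ""})
ds = ds.reindex(indexers={"time": new_index})
print(ds["unknown"].chunks)  # only latitude is split, so reindexing along
                             # time cannot triple a chunk's time extent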

Dynamically generate multiple tasks based on output dictionary from task in Airflow

I have a task whose output is a dictionary with a list as the value of each key:
@task(task_id="gen_dict")
def generate_dict():
    ...
    return output_dict  # output looks like this: {"A": ["aa", "bb", "cc"], "B": ["dd", "ee", "ff"]}

# my DAG (omitting the DAG definition and its properties)
start = DummyOperator(task_id="st")
end = DummyOperator(task_id="ed")

output = generate_dict()
for keys, values in output.items():
    for v in values:
        dm = DummyOperator(task_id=f"dm_{keys}_{v}")
        dm >> end
start >> output
For the sample output above, it should create 6 dummy tasks: dm_A_aa, dm_A_bb, dm_A_cc, dm_B_dd, dm_B_ee, dm_B_ff.
But right now I'm facing this import error:
AttributeError: 'XComArg' object has no attribute 'items'
Is it possible to do what I'm aiming for? If not, is it possible to do it using a list like ["aa", "bb", "cc", "dd", "ee", "ff"] instead?
The code in the question won't work as-is because the loop shown runs when the DAG is parsed (which happens when the scheduler starts up and periodically thereafter), but the data it would loop over isn't known until the task that generates it actually runs.
There are ways to do something similar though.
AIP-42 added the ability to map list data into task kwargs in Airflow 2.3:
@task
def generate_lists():
    # presumably the data below would come from a query executed at runtime
    return [["aa", "bb", "cc"], ["dd", "ee", "ff"]]

@task
def use_list(the_list):
    for item in the_list:
        print(item)

with DAG(...) as dag:
    use_list.expand(the_list=generate_lists())
The code above will create two tasks with output:
aa
bb
cc
dd
ee
ff
In 2.4 the expand_kwargs function was added. It's an alternative to expand (shown above) which operates on dicts instead.
It takes an XComArg referencing a list of dicts whose keys are the names of the arguments that you're mapping the data into. So the following code...
@task
def generate_dicts():
    # presumably the data below would come from a query made at runtime
    return [{"foo": 6, "bar": 7}, {"foo": 8, "bar": 9}]

@task
def two_things(foo, bar):
    print(foo, bar)

with DAG(...) as dag:
    two_things.expand_kwargs(generate_dicts())
... gives two tasks with output:
6 7
...and...
8 9
Where expand only lets you create tasks from the Cartesian product of the input lists, expand_kwargs lets you associate the data with specific kwargs at runtime.
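Applied to the dictionary from the question, one possible sketch (assuming Airflow 2.4+; flatten and make_task are hypothetical names) is to flatten the dict into a list of kwarg dicts and feed it to expand_kwargs. Note that mapped task instances share a single task_id, so the key/value pair becomes the task's input rather than part of its name:
@task
def generate_dict():
    return {"A": ["aa", "bb", "cc"], "B": ["dd", "ee", "ff"]}

@task
def flatten(d):
    # {"A": ["aa", ...], ...} -> [{"key": "A", "value": "aa"}, ...]
    return [{"key": k, "value": v} for k, values in d.items() for v in values]

@task
def make_task(key, value):
    print(f"dm_{key}_{value}")

with DAG(...) as dag:
    make_task.expand_kwargs(flatten(generate_dict()))
This creates six mapped instances of make_task, one per key/value pair.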

PipelinedRDD can't convert to dataframe using toDF

I have a PySpark dataframe containing rows of data separated by commas. I want to split each row, apply the LabeledPoint method to it, and then convert it to a dataframe.
Here is my code
import os.path
from pyspark.mllib.regression import LabeledPoint
import numpy as np
file_name = os.path.join('databricks-datasets', 'cs190', 'data-001', 'millionsong.txt')
raw_data_df = sqlContext.read.load(file_name, 'text')
rdd = raw_data_df.rdd.map(lambda line: line.split(',')).map(lambda seq:LabeledPoints(seq[0],seq[1:])).toDF()
It gives the following error message after applying .toDF().
---------------------------------------------------------------------------
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 38.0 failed 1 times, most recent failure: Lost task 0.0 in stage 38.0 (TID 44, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
Py4JJavaError Traceback (most recent call last)
<ipython-input-65-dc4d86a8ee45> in <module>()
----> 1 rdd = raw_data_df.rdd.map(lambda line: line.split(',')).map(lambda seq:LabeledPoints(seq[0],seq[1:])).toDF()
2 print(type(rdd))
3 #print(rdd.take(5))
/databricks/spark/python/pyspark/sql/context.py in toDF(self, schema, sampleRatio)
62 [Row(name=u'Alice', age=1)]
63 """
---> 64 return sqlContext.createDataFrame(self, schema, sampleRatio)
65
66 RDD.toDF = toDF
/databricks/spark/python/pyspark/sql/context.py in createDataFrame(self, data, schema, samplingRatio)
421
422 if isinstance(data, RDD):
--> 423 rdd, schema = self._createFromRDD(data, schema, samplingRatio)
424 else:
425 rdd, schema = self._createFromLocal(data, schema)
/databricks/spark/python/pyspark/sql/context.py in _createFromRDD(self, rdd, schema, samplingRatio)
Answer found:
rdd = raw_data_df.map(lambda row: row['value'].split(',')).map(lambda seq:LabeledPoint(float(seq[0]),seq[1:])).toDF()
Here, I need to reference the text of each line explicitly using row['value'], even though there is only one column in the row. This is because the dataframe yields Row objects rather than plain strings, so split cannot be called on the row itself; note also that the method is LabeledPoint (not LabeledPoints) and that the label is cast to float.
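For reference, a slightly more explicit sketch of the same fix (hypothetical: it uses .rdd explicitly, which works across Spark versions, and casts the features to float as well):
from pyspark.mllib.regression import LabeledPoint

# spelled-out version of the accepted fix
parsed = (raw_data_df.rdd                            # rows are Row objects, not strings
          .map(lambda row: row['value'].split(','))  # get the text via row['value']
          .map(lambda seq: LabeledPoint(float(seq[0]),
                                        [float(x) for x in seq[1:]])))
df = parsed.toDF()
df.show(3)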

Import data into R - argument is empty

I am trying to use an R package called GOSemSim; it requires importing a lot of data into variables with a specific format, like this:
data1 = c("one", "two", "three")
data2 = c("A", "B", "C")
When the list of data that I try to import into a variable is longer than 293 items, I get the following error message:
argument 293 is empty
There is no error with the quotes or commas; I checked it on Linux, and it does not matter what the data is.
This is really weird. I tried on two computers with no luck. I tried to import the data as a CSV file, but the R package won't allow it.
Does anyone know why you cannot import more than 293 items?
Update:
Here is the code and my data at the same time; it is a one-liner in R, which has never been a problem for me!
OQ = c("GO:0000003", "GO:0000070", "GO:0000077", "GO:0000079", "GO:0000082", "GO:0000086", "GO:0000122", "GO:0000212", "GO:0000226", "GO:0000278", "GO:0000279", "GO:0000280", "GO:0000724", "GO:0000725", "GO:0000819", "GO:0000910", "GO:0001932", "GO:0002118", "GO:0002121", "GO:0002165", "GO:0003002", "GO:0003006", "GO:0006022", "GO:0006030", "GO:0006040", "GO:0006139", "GO:0006259", "GO:0006260", "GO:0006261", "GO:0006267", "GO:0006270", "GO:0006275", "GO:0006277", "GO:0006281", "GO:0006302", "GO:0006304", "GO:0006305", "GO:0006306", "GO:0006310", "GO:0006323", "GO:0006325", "GO:0006342", "GO:0006351", "GO:0006355", "GO:0006357", "GO:0006366", "GO:0006464", "GO:0006468", "GO:0006479", "GO:0006725", "GO:0006807", "GO:0006928", "GO:0006950", "GO:0006974", "GO:0006996", "GO:0007010", "GO:0007017", "GO:0007018", "GO:0007049", "GO:0007051", "GO:0007059", "GO:0007062", "GO:0007067", "GO:0007076", "GO:0007088", "GO:0007093", "GO:0007095", "GO:0007098", "GO:0007126", "GO:0007127", "GO:0007131", "GO:0007140", "GO:0007141", "GO:0007143", "GO:0007154", "GO:0007155", "GO:0007156", "GO:0007259", "GO:0007266", "GO:0007275", "GO:0007276", "GO:0007281", "GO:0007282", "GO:0007292", "GO:0007304", "GO:0007307", "GO:0007346", "GO:0007350", "GO:0007365", "GO:0007367", "GO:0007379", "GO:0007389", "GO:0007399", "GO:0007400", "GO:0007417", "GO:0007420", "GO:0007423", "GO:0007444", "GO:0007472", "GO:0007476", "GO:0007552", "GO:0007560", "GO:0008104", "GO:0008213", "GO:0008283", "GO:0008284", "GO:0008315", "GO:0008356", "GO:0009059", "GO:0009611", "GO:0009653", "GO:0009790", "GO:0009791", "GO:0009880", "GO:0009886", "GO:0009887", "GO:0009888", "GO:0009889", "GO:0009890", "GO:0009892", "GO:0009893", "GO:0009896", "GO:0009968", "GO:0009987", "GO:0010032", "GO:0010033", "GO:0010092", "GO:0010389", "GO:0010468", "GO:0010498", "GO:0010556", "GO:0010558", "GO:0010564", "GO:0010604", "GO:0010605", "GO:0010608", "GO:0010629", "GO:0010648", "GO:0010948", "GO:0014016", "GO:0014017", "GO:0014070", "GO:0016043", "GO:0016055", "GO:0016070", "GO:0016310", "GO:0016319", "GO:0016321", "GO:0016441", "GO:0016458", "GO:0016568", "GO:0016569", "GO:0016570", "GO:0016571", "GO:0016572", "GO:0017145", "GO:0018130", "GO:0019219", "GO:0019222", "GO:0019438", "GO:0019827", "GO:0019953", "GO:0022402", "GO:0022403", "GO:0022404", "GO:0022412", "GO:0022414", "GO:0022610", "GO:0023052", "GO:0023057", "GO:0030111", "GO:0030154", "GO:0030178", "GO:0030182", "GO:0030261", "GO:0030422", "GO:0030703", "GO:0030727", "GO:0031023", "GO:0031047", "GO:0031050", "GO:0031056", "GO:0031060", "GO:0031123", "GO:0031145", "GO:0031175", "GO:0031323", "GO:0031324", "GO:0031325", "GO:0031326", "GO:0031327", "GO:0031331", "GO:0031398", "GO:0031399", "GO:0031401", "GO:0031570", "GO:0031572", "GO:0031935", "GO:0032268", "GO:0032270", "GO:0032501", "GO:0032502", "GO:0032504", "GO:0032507", "GO:0032774", "GO:0032776", "GO:0032886", "GO:0033043", "GO:0033044", "GO:0033260", "GO:0033301", "GO:0033554", "GO:0034622", "GO:0034641", "GO:0034645", "GO:0034654", "GO:0034754", "GO:0034968", "GO:0035023", "GO:0035107", "GO:0035114", "GO:0035120", "GO:0035186", "GO:0035194", "GO:0035195", "GO:0035220", "GO:0035282", "GO:0035295", "GO:0035825", "GO:0036211", "GO:0036388", "GO:0040029", "GO:0042060", "GO:0042221", "GO:0042445", "GO:0043009", "GO:0043066", "GO:0043069", "GO:0043161", "GO:0043170", "GO:0043331", "GO:0043412", "GO:0043414", "GO:0043549", "GO:0043631", "GO:0043933", "GO:0044237", "GO:0044249", "GO:0044260", "GO:0044271", "GO:0044419", "GO:0044700", "GO:0044702", 
"GO:0044703", "GO:0044707", "GO:0044728", "GO:0044763", "GO:0044767", "GO:0044770", "GO:0044771", "GO:0044772", "GO:0044773", "GO:0044774", "GO:0044786", "GO:0044818", "GO:0044839", "GO:0044843", "GO:0044848", "GO:0045132", "GO:0045165", "GO:0045168", "GO:0045185", "GO:0045448", "GO:0045455", "GO:0045787", "GO:0045814", "GO:0045859", "GO:0045892", "GO:0045931", "GO:0045934", "GO:0046331", "GO:0046425", "GO:0046483", "GO:0046580", "GO:0046605", "GO:0046777", "GO:0048070", "GO:0048134", "GO:0048135", "GO:0048285", "GO:0048311", "GO:0048468", "GO:0048477", "GO:0048513", "GO:0048518", "GO:0048519", "GO:0048522", "GO:0048523", "GO:0048563", "GO:0048569", "GO:0048583", "GO:0048585", "GO:0048609", "GO:0048646", "GO:0048666", "GO:0048699", "GO:0048704", "GO:0048705", "GO:0048706", "GO:0048707", "GO:0048731", "GO:0048736", "GO:0048737", "GO:0048754", "GO:0048856", "GO:0048863", "GO:0048865", "GO:0048867", "GO:0048869", "GO:0050789", "GO:0050793", "GO:0050794", "GO:0050896", "GO:0051052", "GO:0051058", "GO:0051128", "GO:0051171", "GO:0051172", "GO:0051225", "GO:0051235", "GO:0051246", "GO:0051247", "GO:0051252", "GO:0051253", "GO:0051276", "GO:0051297", "GO:0051299", "GO:0051301", "GO:0051302", "GO:0051321", "GO:0051325", "GO:0051329", "GO:0051338", "GO:0051351", "GO:0051443", "GO:0051445", "GO:0051641", "GO:0051646", "GO:0051651", "GO:0051704", "GO:0051716", "GO:0051726", "GO:0051783", "GO:0051785", "GO:0060255", "GO:0060429", "GO:0060548", "GO:0060688", "GO:0060966", "GO:0060968", "GO:0060993", "GO:0061138", "GO:0065003", "GO:0065004", "GO:0065007", "GO:0070192", "GO:0070507", "GO:0070887", "GO:0070918", "GO:0071103", "GO:0071359", "GO:0071822", "GO:0071824", "GO:0071840", "GO:0071897", "GO:0071900", "GO:0072028", "GO:0072078", "GO:0072079", "GO:0072088", "GO:0080090", "GO:0090068", "GO:0090304", "GO:0090306", "GO:0098609", "GO:1901071", "GO:1901360", "GO:1901362", "GO:1901576", "GO:1901987", "GO:1901988", "GO:1901990", "GO:1901991", "GO:1902275", "GO:1902299", "GO:1902589", "GO:1902679", "GO:1902749", "GO:1903046", "GO:1903047", "GO:1903308", "GO:1903322", "GO:2000026", "GO:2000112", "GO:2000113", "GO:2001141")
The error message in itself is informative. If one tries to make it reproducible, it's best to work with small subsets. It usually helps to have a dead stare at your data before trying to reproduce the behavior. For example,
OQ = c("GO:0000003", "GO:2001141", )
Notice that there are two elements of this character vector. Or are they?
Error in c("GO:0000003", "GO:2001141", ) : argument 3 is empty
Number 3 is the key: R is expecting three elements. Notice the comma after the second element. Once you remove it, you'll be able to create the OQ variable. Scan your real example; I'm sure there's a , , somewhere.
EDIT
I tried copying and pasting your code into a script in RStudio, and it produced the error you describe. If you scroll right, you'll notice that syntax coloring stops working at around position 5000. I folded the code so that it fits on screen, and it runs fine.
This is how I folded the vector, and it worked:
OQ = c("GO:0000003", "GO:0000070", "GO:0000077", "GO:0000079", "GO:0000082", "GO:0000086", "GO:0000122",
"GO:0000212", "GO:0000226", "GO:0000278", "GO:0000279", "GO:0000280", "GO:0000724", "GO:0000725",
"GO:0000819", "GO:0000910", "GO:0001932", "GO:0002118", "GO:0002121", "GO:0002165", "GO:0003002",
"GO:0003006", "GO:0006022", "GO:0006030", "GO:0006040", "GO:0006139", "GO:0006259", "GO:0006260",
"GO:0006261", "GO:0006267", "GO:0006270", "GO:0006275", "GO:0006277", "GO:0006281", "GO:0006302",
"GO:0006304", "GO:0006305", "GO:0006306", "GO:0006310", "GO:0006323", "GO:0006325", "GO:0006342",
"GO:0006351", "GO:0006355", "GO:0006357", "GO:0006366", "GO:0006464", "GO:0006468", "GO:0006479",
"GO:0006725", "GO:0006807", "GO:0006928", "GO:0006950", "GO:0006974", "GO:0006996", "GO:0007010",
"GO:0007017", "GO:0007018", "GO:0007049", "GO:0007051", "GO:0007059", "GO:0007062", "GO:0007067",
"GO:0007076", "GO:0007088", "GO:0007093", "GO:0007095", "GO:0007098", "GO:0007126", "GO:0007127",
"GO:0007131", "GO:0007140", "GO:0007141", "GO:0007143", "GO:0007154", "GO:0007155", "GO:0007156",
"GO:0007259", "GO:0007266", "GO:0007275", "GO:0007276", "GO:0007281", "GO:0007282", "GO:0007292",
"GO:0007304", "GO:0007307", "GO:0007346", "GO:0007350", "GO:0007365", "GO:0007367", "GO:0007379",
"GO:0007389", "GO:0007399", "GO:0007400", "GO:0007417", "GO:0007420", "GO:0007423", "GO:0007444",
"GO:0007472", "GO:0007476", "GO:0007552", "GO:0007560", "GO:0008104", "GO:0008213", "GO:0008283",
"GO:0008284", "GO:0008315", "GO:0008356", "GO:0009059", "GO:0009611", "GO:0009653", "GO:0009790",
"GO:0009791", "GO:0009880", "GO:0009886", "GO:0009887", "GO:0009888", "GO:0009889", "GO:0009890",
"GO:0009892", "GO:0009893", "GO:0009896", "GO:0009968", "GO:0009987", "GO:0010032", "GO:0010033",
"GO:0010092", "GO:0010389", "GO:0010468", "GO:0010498", "GO:0010556", "GO:0010558", "GO:0010564",
"GO:0010604", "GO:0010605", "GO:0010608", "GO:0010629", "GO:0010648", "GO:0010948", "GO:0014016",
"GO:0014017", "GO:0014070", "GO:0016043", "GO:0016055", "GO:0016070", "GO:0016310", "GO:0016319",
"GO:0016321", "GO:0016441", "GO:0016458", "GO:0016568", "GO:0016569", "GO:0016570", "GO:0016571",
"GO:0016572", "GO:0017145", "GO:0018130", "GO:0019219", "GO:0019222", "GO:0019438", "GO:0019827",
"GO:0019953", "GO:0022402", "GO:0022403", "GO:0022404", "GO:0022412", "GO:0022414", "GO:0022610",
"GO:0023052", "GO:0023057", "GO:0030111", "GO:0030154", "GO:0030178", "GO:0030182", "GO:0030261",
"GO:0030422", "GO:0030703", "GO:0030727", "GO:0031023", "GO:0031047", "GO:0031050", "GO:0031056",
"GO:0031060", "GO:0031123", "GO:0031145", "GO:0031175", "GO:0031323", "GO:0031324", "GO:0031325",
"GO:0031326", "GO:0031327", "GO:0031331", "GO:0031398", "GO:0031399", "GO:0031401", "GO:0031570",
"GO:0031572", "GO:0031935", "GO:0032268", "GO:0032270", "GO:0032501", "GO:0032502", "GO:0032504",
"GO:0032507", "GO:0032774", "GO:0032776", "GO:0032886", "GO:0033043", "GO:0033044", "GO:0033260",
"GO:0033301", "GO:0033554", "GO:0034622", "GO:0034641", "GO:0034645", "GO:0034654", "GO:0034754",
"GO:0034968", "GO:0035023", "GO:0035107", "GO:0035114", "GO:0035120", "GO:0035186", "GO:0035194",
"GO:0035195", "GO:0035220", "GO:0035282", "GO:0035295", "GO:0035825", "GO:0036211", "GO:0036388",
"GO:0040029", "GO:0042060", "GO:0042221", "GO:0042445", "GO:0043009", "GO:0043066", "GO:0043069",
"GO:0043161", "GO:0043170", "GO:0043331", "GO:0043412", "GO:0043414", "GO:0043549", "GO:0043631",
"GO:0043933", "GO:0044237", "GO:0044249", "GO:0044260", "GO:0044271", "GO:0044419", "GO:0044700",
"GO:0044702", "GO:0044703", "GO:0044707", "GO:0044728", "GO:0044763", "GO:0044767", "GO:0044770",
"GO:0044771", "GO:0044772", "GO:0044773", "GO:0044774", "GO:0044786", "GO:0044818", "GO:0044839",
"GO:0044843", "GO:0044848", "GO:0045132", "GO:0045165", "GO:0045168", "GO:0045185", "GO:0045448",
"GO:0045455", "GO:0045787", "GO:0045814", "GO:0045859", "GO:0045892", "GO:0045931", "GO:0045934",
"GO:0046331", "GO:0046425", "GO:0046483", "GO:0046580", "GO:0046605", "GO:0046777", "GO:0048070",
"GO:0048134", "GO:0048135", "GO:0048285", "GO:0048311", "GO:0048468", "GO:0048477", "GO:0048513",
"GO:0048518", "GO:0048519", "GO:0048522", "GO:0048523", "GO:0048563", "GO:0048569", "GO:0048583",
"GO:0048585", "GO:0048609", "GO:0048646", "GO:0048666", "GO:0048699", "GO:0048704", "GO:0048705",
"GO:0048706", "GO:0048707", "GO:0048731", "GO:0048736", "GO:0048737", "GO:0048754", "GO:0048856",
"GO:0048863", "GO:0048865", "GO:0048867", "GO:0048869", "GO:0050789", "GO:0050793", "GO:0050794",
"GO:0050896", "GO:0051052", "GO:0051058", "GO:0051128", "GO:0051171", "GO:0051172", "GO:0051225",
"GO:0051235", "GO:0051246", "GO:0051247", "GO:0051252", "GO:0051253", "GO:0051276", "GO:0051297",
"GO:0051299", "GO:0051301", "GO:0051302", "GO:0051321", "GO:0051325", "GO:0051329", "GO:0051338",
"GO:0051351", "GO:0051443", "GO:0051445", "GO:0051641", "GO:0051646", "GO:0051651", "GO:0051704",
"GO:0051716", "GO:0051726", "GO:0051783", "GO:0051785", "GO:0060255", "GO:0060429", "GO:0060548",
"GO:0060688", "GO:0060966", "GO:0060968", "GO:0060993", "GO:0061138", "GO:0065003", "GO:0065004",
"GO:0065007", "GO:0070192", "GO:0070507", "GO:0070887", "GO:0070918", "GO:0071103", "GO:0071359",
"GO:0071822", "GO:0071824", "GO:0071840", "GO:0071897", "GO:0071900", "GO:0072028", "GO:0072078",
"GO:0072079", "GO:0072088", "GO:0080090", "GO:0090068", "GO:0090304", "GO:0090306", "GO:0098609",
"GO:1901071", "GO:1901360", "GO:1901362", "GO:1901576", "GO:1901987", "GO:1901988", "GO:1901990",
"GO:1901991", "GO:1902275", "GO:1902299", "GO:1902589", "GO:1902679", "GO:1902749", "GO:1903046",
"GO:1903047", "GO:1903308", "GO:1903322", "GO:2000026", "GO:2000112", "GO:2000113", "GO:2001141")

Is there an 11 digits limit for time series numbers in x12 for R?

I am trying to use the x12 function in the x12 package for R.
My problem is, when using a time series object (tso) with monthly data in which each observation is a large number (11 or more digits), the function produces a spec file that the x12a.exe binaries cannot read.
The x12 binaries do not allow the spec file to be wider than 132 columns.
In my example, the spec file has 144 columns, which I believe gives me this error message in R: "ERROR: Input record longer than limit : 133".
When I use smaller numbers (fewer columns) in the spec file, there is no problem so far. When creating the spec file on my own with x12-arima for Windows, I have never seen this problem, because I always use the "free" format (one observation per line) for the series in x12-arima.
My question is: how do I set the format for the time series object to "free", or somehow get just one observation per line, in the "Rout.spc" file while using the x12 function in the x12 package for R?
I am using R version 2.15.2 and RStudio version 0.97.318.
Attached are my example code in RStudio, the output in the R console, and the spec file.
"RStudio"
library(x12)
alt <- read.csv2("alt.csv",header=T)
tal <- ts(data=alt,start=c(1995,4),freq=12)
x12path <- shortPathName("C:\\Dokumenter\\X_12_Arima_Program\\x12a\\x12a.exe")
x12tal <- x12(tso=tal,automdl=T,x12path=x12path,period=12,trendma=23)
"Console"
C:\Dokumenter\Eksperimentering\x12>md gra
C:\Dokumenter\Eksperimentering\x12>C:\DOKUME~1\X_12_A~2\x12a\x12a.exe Rout -g gra
X-12-ARIMA Seasonal Adjustment Program
Version Number 0.3 Build 192
Execution began Mar 12, 2013 23.46.25
Reading input spec file from Rout.spc
Storing any program output into Rout.out
Storing any program error messages into Rout.err
ERROR: Input record longer than limit : 133
Line 6: start=1995.4
^
ERROR: Expected an real number not "111"
Program error(s) halt execution for Rout.spc
Check error file Rout.err
Error messages generated from processing the X-12-ARIMA spec file
Rout.spc:
Error in readx12Out(file, freq_series = frequency(tso), start_series = start(tso), :
Error! No proper run of x12! Check your parameter settings.
"The spec file: Rout.spc"
series{
title="R Output for X12a"
decimals=2
start=1995.4
period=12
data=(
14056669449 12785389868 12772341230 12342935128 12081332395 12110109950 12367542268 12911930417 12836340370 12214486074 12057940408 11555540809
10002847699 9199284760 8704422249 8492914782 8507816348 8470254675 8665139772 8653204621 9177471163 9676069791 9483990311 9825510541
7613345714 7168896536 7527318694 7721174940 7584049271 7586159794 7411383039 7565724342 7555103032 7148551906 7792379395 7493885451
6636374143 6390731897 6160711917 6003196233 5955867663 5868369296 5858314348 6098506333 6297774946 6074680955 6132163345 5875098456
5198306672 4891946405 4875765641 4834436461 4835096514 4804664875 4684550404 4733459404 5056773308 4912329843 5080643820 4568733581
4286693348 3898776528 3872776341 3842469172 3756957390 3782676505 3924066331 3810475969 3943259720 3665136687 3962811976 3449264257
3120637669 2813261665 2692920289 2652153941 2557247524 2658115616 2777287302 2688976703 2712004412 2596430893 2520548046 2455531008
2429263753 2187017586 2181610529 2139024441 2008850781 2049874584 2110715482 2218937956 2565352715 2635375627 2598584163 2435211675
2433625715 2350144562 2298764466 2242464445 2288528533 2532374821 2696862060 2877128057 3086285374 3309497319 3684989376 3709283880
3483967873 3294407926 3465439983 3546006197 3526166213 3625899404 3774201496 3941610691 4325836434 4466576126 4115121591 4036118609
3824882119 3552896925 3649624960 3570454122 3622089655 3662984491 3601306018 3604389348 3620162022 3401732239 3158217491 2896252892
2800864675 2630474256 2668229303 2631120097 2343131082 2163910930 2108285015 2067601541 2099699134 1803097392 1742652674 1626660618
1560369744 1448264771 1419659828 1547101381 1310783818 1358686467 1300281852 1315247637 1380387680 1286158497 1329769957 1272124521
1185603967 1125238745 1217223861 1265616553 1222054134 1279497332 1499392605 1810208712 2314301847 2908395453 3388479445 3441615991
3432688695 3691000321 3891303059 4111250935 4258776704 4586315450 5050122946 5156728599 5550332779 5769588984 5943764465 6032516246
5765718572 5521116586 5498458566 5374456514 5130561755 5219814632 5542173962 6883624616 7744043244 7913799960 7416210299 7127265644
6790509897 6562709494 6390985216 6126897801 5855125688 6259675447 6439114484 6634617502 6771498442 6674343925 6295709586 5890916431
5545655270 5315444742 5205711894 5115065476 4648229650 4724377012 4816989052 5049928441 5041395923
)
}
transform{
function=auto
}
automdl {
maxorder=(3,2)
maxdiff=(1,1)
balanced=yes
savelog=(adf amd b5m mu)
}
forecast {
}
x11{
sigmalim=(1.5,2.5)
trendma=23
excludefcst=yes
final=(user)
appendfcst=yes
savelog=all
}
