Python equivalent of R list()

R's list() allows labelled elements as well; is there an equivalent way in Python to achieve the following?
list("prob", "topTalent", name="Roger")

The Python documentation at https://docs.python.org/3/tutorial/introduction.html implies that you can create recursive structures (the proper R term for structures that can have a tree-like character) of varying types with the "[" brackets:
>>> a = ['a', 'b', 'c']
>>> n = [1, 2, 3]
>>> x = [a, n]
>>> x
[['a', 'b', 'c'], [1, 2, 3]]
I'm just an R guy, but that would seem to imply that Python's "list" data type strongly resembles R's list type.
To get named "recursive" structures, it appears one needs to use a "dictionary" (created with flanking "{" and "}").
>>> x = {'a':a, 'n':n}
>>> x
{'a': ['a', 'b', 'c'], 'n': [1, 2, 3]}
It appears that Python requires names for its dictionary entries while R allows both named and unnamed entries in a list.
>>> x = {'a':a, 'n':n, 'z':[1,2,3], 'zz':{'s':[4,5,6], 'd':['t','y']} }
>>> x
{'a': ['a', 'b', 'c'], 'n': [1, 2, 3], 'z': [1, 2, 3], 'zz': {'s': [4, 5, 6], 'd': ['t', 'y']}}
Accessing items in a Python dict resembles item access in R:
>>> x['zz']
{'s': [4, 5, 6], 'd': ['t', 'y']}
>>> x['zz']['s']
[4, 5, 6]

There's no equivalent. Python lists have nothing like R's names, and OrderedDict (as suggested in the comments) does not allow the equivalent of unnamed elements or duplicate names, as well as not supporting access by element position.
A dict would be the most common way of associating objects with names in Python, but it's still very different from an R list with names. You could certainly create your own class attempting to mimic the equivalent R data structure, perhaps subclassing list or collections.UserList, but you'd have to implement a lot of functionality yourself, and existing functions you pass your object to wouldn't know what to do with the names.
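To give a rough idea of how much you would have to implement yourself, here is a minimal, untested sketch of such a subclass; the NamedList name and its behaviour are my own invention, not an existing library type:
class NamedList(list):
    """Very rough sketch of an R-style list: positional elements, each optionally named."""

    def __init__(self, *args, **named):
        # Unnamed elements first, then named ones, mirroring list(..., name = ...) in R
        super().__init__(args)
        self.names = [None] * len(args)
        for name, value in named.items():
            self.append(value)
            self.names.append(name)

    def __getitem__(self, key):
        # Allow access by position (int) or by name (str)
        if isinstance(key, str):
            return super().__getitem__(self.names.index(key))
        return super().__getitem__(key)

x = NamedList("prob", "topTalent", name="Roger")
x[0]        # 'prob'
x["name"]   # 'Roger'
Even this only covers item access: slicing, copying and printing would still drop the names, which is the point above about existing functions not knowing what to do with them.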

Related

How to use the map function in Haskell?

I'm trying to use map to return a list of lists, but I keep getting an error. I know map takes a function and then applies that function to a list.
map (take 3) [1,2,3,4,5]
This is supposed to return [[1,2,3],[2,3,4],[3,4,5]], but instead I get this error:
<interactive>:6:1: error:
• Non type-variable argument in the constraint: Num [a]
(Use FlexibleContexts to permit this)
• When checking the inferred type
it :: forall a. Num [a] => [[a]]
Is it hitting null? Is that why?
Let's take a look at exactly what the error message is saying.
map (take 3) [1, 2, 3, 4, 5]
map's type signature is
map :: (a -> b) -> [a] -> [b]
So it takes a function from a to b and returns a function from [a] to [b]. In your case, the function is take 3, which takes a list and returns a list. So a and b are both [t]. Therefore, the second argument to map should be [[t]], a list of lists. Now, Haskell looks at the second argument and sees that it's a list of numbers. So it says "How can I make a number into a list?" Haskell doesn't know of any good way to do that, so it complains about the impossible constraint Num [a]: it would need the lists themselves to be numbers.
Now, as for what you meant to do, I believe it was mentioned in the comments. The tails function1 takes a list and returns the list of all tails of that list. So
tails [1, 2, 3, 4, 5]
-- ==> [[1, 2, 3, 4, 5], [2, 3, 4, 5], [3, 4, 5], [4, 5], [5], []]
Now you can apply the take function to each argument.
map (take 3) (tails [1, 2, 3, 4, 5])
-- ==> [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5], [5], []]
Oops! We've got some extra values we don't want. We only want the values that have three elements in them. So let's filter out the ones we don't want. filter takes a predicate (which is just a fancy way of saying "a function that returns a Boolean") and a list, and returns a list containing only the elements that satisfy the predicate. The predicate we want is one that takes a list and returns whether or not that list has exactly three elements.
\x -> ... -- We want only the lists
\x -> length x ... -- whose length
\x -> length x == 3 -- is exactly equal to 3
So that's our function. Now we pass that to filter.
filter (\x -> length x == 3) (map (take 3) (tails [1, 2, 3, 4, 5]))
-- ==> [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
[1] Note that you may need to import Data.List to get the tails function.

Indexing an array with a tuple

Suppose I have a tuple of (1, 2, 3) and want to index a multidimensional array with it such as:
index = (1, 2, 3)
table[index] = 42 # behaves like table[1][2][3]
index has an unknown number of dimensions, so I can't do:
table[index[0]][index[1]][index[2]]
I know I could do something like this:
functools.reduce(lambda x, y: x[y], index, table)
but it's utterly ugly (and maybe also inefficient), so I wonder if there's a better, more Pythonic choice.
EDIT: Maybe a simple loop is the best choice:
elem = table
for i in index:
    elem = elem[i]
EDIT2: Actually, there's a problem with both solutions: I can't assign a value to the indexed array :-(, back to ugly:
elem = table
for i in index[:-1]:
    elem = elem[i]
elem[index[-1]] = 42
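Wrapping both operations in small helper functions at least keeps the ugliness in one place; here is a sketch of that idea (the helper names are arbitrary):
from functools import reduce

def get_by_tuple(table, index):
    # Walk down one level per index element: table[i0][i1]...[iN]
    return reduce(lambda elem, i: elem[i], index, table)

def set_by_tuple(table, index, value):
    # Navigate to the parent container, then assign at the last index
    parent = reduce(lambda elem, i: elem[i], index[:-1], table)
    parent[index[-1]] = value

table = [[[0, 0], [0, 0]], [[0, 0], [0, 0, 0, 0]]]
set_by_tuple(table, (1, 1, 3), 42)
get_by_tuple(table, (1, 1, 3))   # 42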
The question is very interesting and your suggested solution also looks good (haven't checked it, but this kind of problem requires a recursive treatment and you just did it in one line).
However, the pythonic way I use in my programs is to use dictionaries of tuples. The syntax is array-like, the performance - of a dictionary, and there was no problem in it for me.
For example:
a = {(1, 2, 3): 'A', (3, 4, 5): 'B', (5, 6, 7, 8): 'C'}
print a[1, 2, 3]
print a[5, 6, 7, 8]
Will output:
A
C
And assigning to an index is super easy:
a[1, 4, 5] = 42 (but you might want to check first whether (1, 4, 5) is already in the dict; if not, the assignment will create it).
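A minimal sketch of that dict-of-tuples pattern, including the membership check mentioned above (the variable names are arbitrary):
table = {}
table[1, 2, 3] = 42          # assignment creates the entry if the key is absent

if (1, 2, 3) in table:       # check before reading ...
    value = table[1, 2, 3]

other = table.get((9, 9, 9), 0)   # ... or use .get() with a default for missing keys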

Appending data to an AT Field using transmogrifier

I have a CSV file of data like this:
1, [a, b, c]
2, [a, b, d]
3, [a]
and some Plone objects which should be updated like this:
ID, LinesField
a, [1,2,3]
b, [1,2]
c, [1]
d, [2]
So, to clarify, the object with the id a is named on lines 1, 2 and 3 of the CSV, and thus the LinesField property of object a needs to have those line ids (the first number on the line) listed.
Ideally I'd like to use Transmogrifier to import this information (and avoid doing any manipulation in Excel beforehand). I can see two ways of doing this in theory, but I can't work out how to do either in practice, so I'd be grateful for some pointers to examples. Either I need to transform the entire pipeline so that the items reflect the structure of my Plone objects and then use the ATSchemaUpdater blueprint, but I can't see any examples of how to add items to the pipeline (do I need to write my own blueprint?). Or, alternatively, I could loop through the items as they exist and append the value in the left column to the list in the right-hand column; for that I'd need a way of appending values with ATSchemaUpdater rather than overwriting them - again, is there a blueprint for that anywhere?
Here's a few sample csv lines:
"Name","Themes"
"Bessie Brown","cah;cab;cac"
"Fred Blogs","cah;cac"
"Dinah Washington","cah;cab"
The Plone object will be a theme and the lines field a list of names:
cah, ['Bessie Brown', 'Fred Blogs', ...]
I'm not quite sure how you want to read the CSV file using transmogrifier, but I think you can create a section to insert these values into the items in the pipeline using a function like this:
def transpose(cvs):
    # Collect every value that appears in any row of the mapping
    keys = []
    for v in cvs.values():
        keys.extend(v)
    keys = set(keys)
    d = {}
    for key in keys:
        # Map each value back to the rows (keys of cvs) it appears in
        values = [k for k, v in cvs.iteritems() if key in v]
        d[key] = values
    return d
In this context, cvs is {1: ['a', 'b', 'c'], 2: ['a', 'b', 'd'], 3: ['a']}; keys will contain all possible values set(['a', 'c', 'b', 'd']); and d will be what you want {'a': [1, 2, 3], 'c': [1], 'b': [1, 2], 'd': [2]}.
Probably there are better ways to do it, but I'm not a Python magician.
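For completeness, here is a rough sketch of how the sample CSV above could be loaded into such a cvs mapping, keyed by the Name column rather than a line number (the file name themes.csv is an assumption, and transpose above uses Python 2's iteritems, in line with the rest of this answer):
import csv

def read_themes(path):
    # Build {name: [theme, theme, ...]} from rows like "Bessie Brown","cah;cab;cac"
    mapping = {}
    with open(path) as f:
        for row in csv.DictReader(f):
            mapping[row['Name']] = row['Themes'].split(';')
    return mapping

cvs = read_themes('themes.csv')  # {'Bessie Brown': ['cah', 'cab', 'cac'], ...}
new_keys = transpose(cvs)        # {'cah': ['Bessie Brown', 'Fred Blogs', 'Dinah Washington'], ...}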
The insert section could look like this one:
from zope.interface import classProvides, implements
from collective.transmogrifier.interfaces import ISection, ISectionBlueprint

class Insert(object):
    """Insert new keys into items.
    """
    classProvides(ISectionBlueprint)
    implements(ISection)

    def __init__(self, transmogrifier, name, options, previous):
        self.previous = previous
        self.new_keys = transpose(cvs)  # cvs must be available here, e.g. parsed from the CSV

    def __iter__(self):
        for item in self.previous:
            item.update(self.new_keys)
            yield item
After that you can use the SchemaUpdater section.

dynamic values in kwargs

I have a layer which helps me populate records from the form into tables and vice versa; it does some input checking, etc.
Now several methods of this layer, which are called several times in different parts of the webform, take the same parameters, so I wanted to pack them once at the beginning of the code file:
kwargs = {"tabla": "nombre_tabla", "id": [hf_id.Value],
          "container": Panel1, "MsgBox1": MsgBox1}
then I call
IA.search(**kwargs)
but that way the values of the dictionary get fixed to the ones they had at the beginning, and one of them is retrieved from a web control, so it needs to be dynamic. So I wrapped them in a function:
def kwargs():
    return {"tabla": "nombre_tabla", "id": [hf_id.Value],
            "container": Panel1, "MsgBox1": MsgBox1}
and then I call
IA.search(**kwargs())
IA.save(**kwargs())
etc.
and that way the value that comes from the webform (hf_id) is dynamic and not fixed. But I was wondering whether, in this case, there is another, more pythonic way to keep the values of the kwargs dictionary dynamic rather than fixed.
Python variables hold references to objects (though the references themselves are not directly manipulable by the user).
So if you create a list like this:
>>> a = [1, 2, 3]
and then store it in a dictionary:
>>> b = { 'key': a, 'anotherkey': 'spam' }
you will find modifications to the value in the dictionary also modify the original list:
>>> b['key'].append(4)
>>> print b['key']
[1, 2, 3, 4]
>>> print a
[1, 2, 3, 4]
If you want a copy of an item, so that modifications will not change the original item, then use the copy module.
>>> from copy import copy
>>> a = [1, 2, 3]
>>> b['key'] = copy(a)
>>> print b['key']
[1, 2, 3]
>>> b['key'].append(4)
>>> print b['key']
[1, 2, 3, 4]
>>> print a
[1, 2, 3]
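Coming back to the original question: since hf_id.Value is only read when the dictionary is built, rebuilding the dictionary on every call, as the question's own kwargs() function already does, is a perfectly pythonic way to keep it dynamic. A minimal sketch with hypothetical stand-ins for the web controls:
class HiddenField(object):
    # Stand-in for the hf_id web control from the question
    def __init__(self):
        self.Value = None

hf_id = HiddenField()
Panel1 = object()   # stand-in for the panel control
MsgBox1 = object()  # stand-in for the message box control

def common_kwargs():
    # Built fresh on every call, so hf_id.Value is read at call time
    return {"tabla": "nombre_tabla", "id": [hf_id.Value],
            "container": Panel1, "MsgBox1": MsgBox1}

hf_id.Value = "first"
print(common_kwargs()["id"])   # ['first']
hf_id.Value = "second"
print(common_kwargs()["id"])   # ['second']
Each call like IA.search(**common_kwargs()) then sees the current value of the control.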

How can I get a flat result from a list comprehension instead of a nested list?

I have a list A, and a function f which takes an item of A and returns a list. I can use a list comprehension to convert everything in A like [f(a) for a in A], but this returns a list of lists. Suppose my input is [a1,a2,a3], resulting in [[b11,b12],[b21,b22],[b31,b32]].
How can I get the flattened list [b11,b12,b21,b22,b31,b32] instead? In other words, in Python, how can I get what is traditionally called flatmap in functional programming languages, or SelectMany in .NET?
(In the actual code, A is a list of directories, and f is os.listdir. I want to build a flat list of subdirectories.)
See also: How do I make a flat list out of a list of lists? for the more general problem of flattening a list of lists after it's been created.
You can have nested iterations in a single list comprehension:
[filename for path in dirs for filename in os.listdir(path)]
which is equivalent (at least functionally) to:
filenames = []
for path in dirs:
    for filename in os.listdir(path):
        filenames.append(filename)
>>> from functools import reduce # not needed on Python 2
>>> list_of_lists = [[1, 2],[3, 4, 5], [6]]
>>> reduce(list.__add__, list_of_lists)
[1, 2, 3, 4, 5, 6]
The itertools solution is more efficient, but this feels very pythonic.
You can find a good answer in the itertools recipes:
import itertools

def flatten(list_of_lists):
    return list(itertools.chain.from_iterable(list_of_lists))
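For example, applied to a list of lists it gives one flat list:
>>> flatten([[1, 2], [3, 4, 5], [6]])
[1, 2, 3, 4, 5, 6]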
The question asked for flatmap. Some implementations have been proposed, but they may create intermediate lists unnecessarily. Here is one implementation based on iterators.
import itertools

def flatmap(func, *iterable):
    return itertools.chain.from_iterable(map(func, *iterable))
In [148]: list(flatmap(os.listdir, ['c:/mfg','c:/Intel']))
Out[148]: ['SPEC.pdf', 'W7ADD64EN006.cdr', 'W7ADD64EN006.pdf', 'ExtremeGraphics', 'Logs']
In Python 2.x, use itertools.imap in place of map.
You could just do the straightforward:
subs = []
for d in dirs:
    subs.extend(os.listdir(d))
You can concatenate lists using the normal addition operator:
>>> [1, 2] + [3, 4]
[1, 2, 3, 4]
The built-in function sum will add the numbers in a sequence and can optionally start from a specific value:
>>> sum(xrange(10), 100)
145
Combine the above to flatten a list of lists:
>>> sum([[1, 2], [3, 4]], [])
[1, 2, 3, 4]
You can now define your flatmap:
>>> def flatmap(f, seq):
... return sum([f(s) for s in seq], [])
...
>>> flatmap(range, [1,2,3])
[0, 0, 1, 0, 1, 2]
Edit: I just saw the critique in the comments for another answer and I guess it is correct that Python will needlessly build and garbage collect lots of smaller lists with this solution. So the best thing that can be said about it is that it is very simple and concise if you're used to functional programming :-)
subs = []
map(subs.extend, (os.listdir(d) for d in dirs))  # works on Python 2; on Python 3 map is lazy, so use a plain loop instead
(but Ants's answer is better; +1 for him)
import itertools
x=[['b11','b12'],['b21','b22'],['b31']]
y=list(itertools.chain(*x))
print y
itertools works from Python 2.3 and greater.
You could try itertools.chain(), like this:
import itertools
import os
dirs = ["c:\\usr", "c:\\temp"]
subs = list(itertools.chain(*[os.listdir(d) for d in dirs]))
print subs
itertools.chain() returns an iterator, hence the passing to list().
This is the simplest way to do it:
from functools import reduce  # not needed on Python 2

def flatMap(array):
    return reduce(lambda a, b: a + b, array)
The 'a+b' refers to concatenation of two lists
You can use pyxtension:
from pyxtension.streams import stream
stream([ [1,2,3], [4,5], [], [6] ]).flatMap() == range(7)
Google brought me to this next solution:
def flatten(l):
    # recursively flatten arbitrarily nested lists
    if isinstance(l, list):
        return sum(map(flatten, l), [])
    else:
        return [l]
If listA = [list1, list2, list3], then
flattened_list = reduce(lambda x, y: x + y, listA)
will do it.
