(05-16-2023, 05:38 PM)jacques_duflos Wrote: thanks for the advice. Lists with python are a nightmare for me.
Lists are one of the very cool features of Python, so better get acquainted with them.
(05-16-2023, 05:38 PM)jacques_duflos Wrote: another surely optimisable part of my script is when I triple the elements of the convex hull to the points and their two tangents. And then I flatten the list with a line of code I copied without understanding it from this page. Is there a more pythonic way of doing so ?
Code:
hull = [[i,i,i] for i in hull]
hull = [item for sublist in hull for item in sublist]
Yes, you can actually do it in one shot:
Code:
tripledHull=[p for p in hull for _ in range(3)]
And if your ps are really points (coordinate pairs) you can even get the completely flattened list of coordinates in one shot:
Code:
>>> hull=[(1.,2.),(3.,4.),(5.,6.)]
>>> pathCoords=[c for p in hull for _ in range(3) for c in p]
>>> pathCoords
[1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 3.0, 4.0, 3.0, 4.0, 5.0, 6.0, 5.0, 6.0, 5.0, 6.0]
The "comprehension" is really just a loop written backwards. You can read it like this:
Code:
pathCoords = []
for p in hull:          # iterate over the points
    for _ in range(3):  # do the same thing three times
        for c in p:     # iterate over the coordinates of the point
            pathCoords.append(c)
(05-16-2023, 05:38 PM)jacques_duflos Wrote: Another curiosity in my script is the line I delete doubles. I use the line found in this page, but is it not over-engineer to use dictionaries for such a common task ?
Code:
listOfPointsTupple = list(dict.fromkeys(listOfPointsTupple))
A slightly simpler way would be to go through a set: uniques=list(set(duplicates)). However, a set has no order, so nothing says you will get the points back in hull order, which is important if you are making a path from them. Dictionary keys, on the other hand, are always listed in the order in which they were added (so they are technically an "ordered set"). The core of Python is built on very efficient hash maps, and dictionaries and sets, being implemented with such maps, are very efficient. So, in the general case, that technique has some merit. The case where you could do better is when duplicates are guaranteed to be consecutive, in which case you can skip appending a point to the output list if it is equal to the last element already there (targetList[-1]).
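For that consecutive-duplicates case, here is a minimal sketch (the function name and the sample points are made up for illustration, not taken from your script):

```python
def dedupe_consecutive(points):
    """Drop duplicates that appear back to back, keeping the original order."""
    uniques = []
    for p in points:
        # Only append if the list is empty or p differs from the last kept point
        if not uniques or p != uniques[-1]:
            uniques.append(p)
    return uniques

print(dedupe_consecutive([(1, 2), (1, 2), (3, 4), (3, 4), (1, 2)]))
# → [(1, 2), (3, 4), (1, 2)]
```

Note that, unlike dict.fromkeys(), this keeps the trailing (1, 2) because it is not consecutive with the first one, which is exactly what you want when walking a closed hull.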
Edit: Major snag: dicts only keep insertion order since Python 3.6 (guaranteed by the language since 3.7). Since you are on Python 2.7, you have to explicitly use an OrderedDict:
Code:
>>> from collections import OrderedDict
>>> uniques=list(OrderedDict.fromkeys(duplicates))