Python exacerbates that, because there are so many ways to do things, particularly the kinds of things I'm doing right now, like parsing text files with tables in human-readable but not-particularly-computer-readable formats.
Here's a tiny example: I needed to remove commas from strings like "1,207" before converting them to integers to graph them. I thought of using slicing, which is a powerful Python feature on lists, strings and more:
>>> string = "1,207"
>>> string[:string.find(',')] + string[string.find(',')+1:]
'1207'
It got ugly fast, so I looked for something else:
>>> string.replace(',', '')
'1207'
Splits are also powerful:
>>> ''.join(string.split(','))
'1207'
Of course, I could also define a simple "removefrom(char, string)" function:
>>> def removefrom(char, string):
...     return string.replace(char, '')
...
>>> removefrom(',', string)
'1207'
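Since the real goal is converting these strings to integers for graphing, here's a small sketch of two standard-library ways to do the whole job in one step. The helper names (`to_int`, `to_int_re`) are my own; the regex version is an assumption about what extra junk might appear in the data:

```python
import re

def to_int(s):
    """Strip thousands separators, then convert to int."""
    return int(s.replace(',', ''))

def to_int_re(s):
    """Same idea with a regex, handy if other characters
    (say, a leading '$') can show up in the field."""
    return int(re.sub(r'[^\d-]', '', s))

print(to_int('1,207'))       # 1207
print(to_int_re('$1,207'))   # 1207
```

The standard library's locale module also has locale.atoi, which understands thousands separators once the locale is set, though that feels like overkill for a quick graphing script.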
To find good ways of doing things, I end up browsing a lot of Python code online. That's frustrating because some sites have started hosting "sample code" mostly as a way to put advertisements on-screen and in pop-ups. And I'm sure the most trivial code review would identify plenty of areas my code could be much more Python-clever.
1 comment:
If you are doing a lot of text file crunching, I'd invite you to give pyparsing a look. It is a 100% Python parser development module, with a number of built-in shortcuts. Pyparsing takes a more verbose and object-oriented approach to parsing than tools such as REs or lex/yacc/PLY/simpleparse/etc., but this tends to simplify the grammar development process.
The pyparsing wiki is at http://pyparsing.wikispaces.com.
Cheers!
-- Paul