Complex number arithmetic in Tcl?
Why don’t you try this: http://wiki.tcl.tk/11415 — or something like this: http://wiki.tcl.tk/13885. I hope these are easy-to-use alternatives to the utility you mentioned.
It seems that you have a binary file with text at a fixed or otherwise deducible position. Get-Content might help you, but it will try to parse the entire file into an array of strings, creating an array of “garbage”. Also, you wouldn’t know what file position a particular run of characters came from. … Read more
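The same idea, reading a text field at a known byte offset instead of letting the tool string-ify the whole file, can be sketched in Python (the offsets, field width, and sample blob here are made-up illustrations, not from the original question):

```python
import io

def read_text_at(data: bytes, offset: int, length: int, encoding: str = "ascii") -> str:
    """Read a fixed-width text field from binary data at a known offset,
    without decoding the rest of the file."""
    buf = io.BytesIO(data)
    buf.seek(offset)
    return buf.read(length).decode(encoding)

# hypothetical blob: 2 bytes of binary header, then a 6-character text field
blob = b"\x00\x01HEADER42\xff\xfe"
print(read_text_at(blob, 2, 6))
```

This avoids both problems above: nothing outside the field is decoded, and the offset is explicit.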
Pure PEG cannot parse indentation. But peg.js can. I did a quick-and-dirty experiment (inspired by Ira Baxter’s comment about cheating) and wrote a simple tokenizer. For a more complete solution (a complete parser) please see this question: Parse indentation level with PEG.js /* Initializations */ { function start(first, tail) { var done = [first[1]]; … Read more
You can just use drop: val iter = src.getLines().drop(1).map(_.split(":")) From the documentation: def drop(n: Int): Iterator[A] — Advances this iterator past the first n elements, or the length of the iterator, whichever is smaller.
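The equivalent move in Python, skipping the header element of an iterator before mapping over the rest, would look like this (the sample data is invented for illustration):

```python
import itertools

def records(lines):
    """Skip the first (header) line, then split each remaining line on ':'."""
    return [line.split(":") for line in itertools.islice(lines, 1, None)]

src = iter(["name:age", "alice:30", "bob:25"])
print(records(src))
```

Like Scala's `drop`, `islice(it, 1, None)` consumes lazily and simply yields nothing if the iterator is shorter than the skip count.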
For the case when you know how many columns of data there will be in your CSV file, one simple call to textscan like Amro suggests will be your best solution. However, if you don’t know a priori how many columns are in your file, you can use a more general approach like I did … Read more
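The same "unknown column count" situation comes up outside MATLAB; a small Python sketch of the general approach, read every row first, then normalize to the widest row, assuming ragged comma-separated input:

```python
import csv
import io

def read_ragged_csv(text):
    """Read CSV rows without assuming a fixed column count, then pad short
    rows so downstream code sees a rectangular table."""
    rows = list(csv.reader(io.StringIO(text)))
    width = max(len(r) for r in rows)
    return [r + [""] * (width - len(r)) for r in rows]

table = read_ragged_csv("a,b\n1,2,3\n")
print(table)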
First of all, in real-world use lexing and parsing are time-critical, especially if you need to process tokens before parsing, for example filtering and collecting comments, or resolving context-dependent conflicts. In such cases the parser often waits for the lexer. As for the question itself: yes, you can run lexing and parsing concurrently … Read more
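The producer/consumer arrangement described above can be sketched with a bounded queue between a lexer thread and the parser; this is a generic illustration of the pattern (whitespace-splitting stands in for a real tokenizer):

```python
import queue
import threading

def lexer(text, out):
    """Producer: tokenize in its own thread and hand tokens to the parser."""
    for word in text.split():
        out.put(word)
    out.put(None)  # end-of-stream sentinel

def parse(text):
    """Consumer: pull tokens as they become available."""
    q = queue.Queue(maxsize=64)  # bounded, so the lexer can't run far ahead
    t = threading.Thread(target=lexer, args=(text, q))
    t.start()
    tokens = []
    while (tok := q.get()) is not None:
        tokens.append(tok)  # a real parser would consume tokens here
    t.join()
    return tokens

print(parse("let x = 1"))
```

The bounded queue is the important design choice: it gives back-pressure, so a fast lexer blocks instead of buffering the whole token stream when the parser is the slow side.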
The simple answer is “Yes”. In the abstract, you don’t need lexers at all. You could simply write a grammar that used individual characters as tokens (and in fact that’s exactly what SGLR parsers do, but that’s a story for another day). You need lexers because parsers built using characters as primitive elements aren’t as … Read more
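To make the "characters as tokens" idea concrete, here is a tiny lexer-free recursive-descent parser that works directly on individual characters (a deliberately minimal illustration, not SGLR): it evaluates sums of integers like "1+23+4" with no tokenization pass at all.

```python
def parse_number(s, i):
    """Consume a run of digit characters starting at position i."""
    start = i
    while i < len(s) and s[i].isdigit():
        i += 1
    if start == i:
        raise SyntaxError(f"digit expected at position {i}")
    return int(s[start:i]), i

def parse_sum(s, i=0):
    """Character-level parse of 'number (+ number)*'; returns (value, pos)."""
    total, i = parse_number(s, i)
    while i < len(s) and s[i] == "+":
        value, i = parse_number(s, i + 1)
        total += value
    return total, i

print(parse_sum("1+23+4"))
```

Notice what the lexer would normally do for free: this grammar cannot skip whitespace anywhere, which is exactly the kind of bookkeeping that makes character-level grammars larger and slower in practice.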
All of the other answers have missed how hard it is to do this properly. You can do a first-cut approach that is accurate to a certain extent, but until you take into account IEEE rounding modes (et al.), you will never have the right answer. I’ve written naive implementations before with … Read more
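To show what a "naive implementation" of string-to-double looks like, and where the trouble hides, here is a sketch in Python. Every marked step performs its own floating-point rounding, so the accumulated result is not guaranteed to be the correctly rounded double that a proper conversion (like Python's own `float(s)`) produces:

```python
def naive_atof(s):
    """Digit-by-digit string-to-double conversion. Each arithmetic step
    rounds independently, so the final value can be off by an ulp or more
    for some inputs; correct converters round exactly once."""
    int_part, _, frac_part = s.partition(".")
    value = 0.0
    for d in int_part:
        value = value * 10.0 + int(d)   # may round here...
    scale = 1.0
    for d in frac_part:
        scale /= 10.0                   # ...and here...
        value += int(d) * scale         # ...and here
    return value
```

For short inputs the naive result usually matches, which is exactly why these bugs survive casual testing; the failures show up on long decimal strings near rounding boundaries.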
Several of these other answers are very good. I’ll try to fill in some things they haven’t mentioned. EDI is a set of standards, the most common of which are: ANSI X12 (popular in the states) EDIFACT (popular in Europe) Sounds like you’re looking at X12 version 4010. That’s the most widely used (in my … Read more
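Structurally, an X12 interchange is just segments ending in a terminator (commonly `~`), each split into elements by a separator (commonly `*`). A minimal sketch of that splitting in Python, noting that in real X12 the separators are declared in the ISA header rather than fixed, and the sample PO message here is invented:

```python
def parse_x12_segments(message, seg_term="~", elem_sep="*"):
    """Split an X12 message into segments, each a list of elements.
    Real code should read the separators from the ISA envelope segment."""
    segments = []
    for raw in message.split(seg_term):
        raw = raw.strip()
        if raw:
            segments.append(raw.split(elem_sep))
    return segments

# hypothetical fragment of an X12 850 (purchase order) transaction set
msg = "ST*850*0001~BEG*00*SA*PO123~SE*3*0001~"
for seg in parse_x12_segments(msg):
    print(seg)
```

The segment ID (`ST`, `BEG`, `SE`, …) is always the first element, which is what makes even this crude split useful for routing and inspection.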
What you missed is whitespace. I threw in a couple of bonus improvements. import scala.util.parsing.combinator._ object CSV extends RegexParsers { override protected val whiteSpace = """[ \t]""".r def COMMA = "," def DQUOTE = "\"" def DQUOTE2 = "\"\"" ^^ { case _ => "\"" } def CR = "\r" def LF = "\n" def CRLF … Read more
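The `DQUOTE2` rule above implements the CSV convention that a doubled quote inside a quoted field means one literal quote character. For comparison (not the answer's own code), Python's `csv` module applies the same convention by default via its `doublequote` handling:

```python
import csv
import io

def parse_csv(text):
    """Parse CSV text where '""' inside a quoted field is an escaped quote,
    the same convention as the DQUOTE2 rule in the combinator grammar."""
    return list(csv.reader(io.StringIO(text)))

print(parse_csv('a,"b""c"\n'))
```

Whitespace is the same trap in both worlds: a parser that silently skips `\r` and `\n` as whitespace can never see the record separators, which is why the Scala answer restricts `whiteSpace` to spaces and tabs.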