How can I read and parse CSV files in C++?
Asked Answered
D

39

335

I need to load and use CSV file data in C++. At this point it can really just be a comma-delimited parser (i.e., don't worry about escaping newlines and commas). The main need is a line-by-line parser that will return a vector for the next line each time the method is called.

I found this article which looks quite promising: http://www.boost.org/doc/libs/1_35_0/libs/spirit/example/fundamental/list_parser.cpp

I've never used Boost's Spirit, but am willing to try it. But only if there isn't a more straightforward solution I'm overlooking.

Disembroil answered 13/7, 2009 at 15:25 Comment(7)
I have looked at boost::spirit for parsing. It is more for parsing grammars than parsing a simple file format. Someone on my team was trying to use it to parse XML and it was a pain to debug. Stay away from boost::spirit if possible.Misjudge
Sorry chrish, but that's terrible advice. Spirit isn't always an appropriate solution but I've used it - and continue to use it - successfully in a number of projects. Compared to similar tools (Antlr, Lex/yacc etc) it has significant advantages. Now, for parsing CSV it's probably overkill...Playacting
@Playacting IMHO spirit is pretty hard to use for a parser combinator library. Having had some (very pleasant) experience with Haskells (atto)parsec libraries I expected it (spirit) to work similarly well, but gave up on it after fighting with 600 line compiler errors.Spruik
C CSV Parser: sourceforge.net/projects/cccsvparser C CSV Writer: sourceforge.net/projects/cccsvwriterBoom
Why don't you want to escape commas and new lines! Every search links to this question and I could not find one answer that considers the escaping! :|Chaffer
@Boom Thanks for the link. I was searching for good csv reader and I found cccsvparser great. ThanksLitt
In case you will need a proper parser have a look at these options: cpp.libhunt.com/fast-cpp-csv-parser-alternativesObject
R
366

If you don't care about escaping commas and newlines,
AND you don't need to embed commas and newlines in quotes (if you can't escape them, then you can't embed them anyway),
then it's only about three lines of code (OK, 14, but it's only 15 to read the whole file).

std::vector<std::string> getNextLineAndSplitIntoTokens(std::istream& str)
{
    std::vector<std::string>   result;
    std::string                line;
    std::getline(str,line);

    std::stringstream          lineStream(line);
    std::string                cell;

    while(std::getline(lineStream,cell, ','))
    {
        result.push_back(cell);
    }
    // This checks for a trailing comma with no data after it.
    if (!lineStream && cell.empty())
    {
        // If there was a trailing comma then add an empty element.
        result.push_back("");
    }
    return result;
}

I would just create a class representing a row.
Then stream into that object:

#include <iterator>
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include <string>

class CSVRow
{
    public:
        std::string_view operator[](std::size_t index) const
        {
            return std::string_view(&m_line[m_data[index] + 1], m_data[index + 1] -  (m_data[index] + 1));
        }
        std::size_t size() const
        {
            return m_data.size() - 1;
        }
        void readNextRow(std::istream& str)
        {
            std::getline(str, m_line);

            m_data.clear();
            m_data.emplace_back(-1);
            std::string::size_type pos = 0;
            while((pos = m_line.find(',', pos)) != std::string::npos)
            {
                m_data.emplace_back(pos);
                ++pos;
            }
            // This checks for a trailing comma with no data after it.
            pos   = m_line.size();
            m_data.emplace_back(pos);
        }
    private:
        std::string         m_line;
        std::vector<int>    m_data;
};

std::istream& operator>>(std::istream& str, CSVRow& data)
{
    data.readNextRow(str);
    return str;
}   
int main()
{
    std::ifstream       file("plop.csv");

    CSVRow              row;
    while(file >> row)
    {
        std::cout << "4th Element(" << row[3] << ")\n";
    }
}

But with a little work we could technically create an iterator:

class CSVIterator
{   
    public:
        typedef std::input_iterator_tag     iterator_category;
        typedef CSVRow                      value_type;
        typedef std::size_t                 difference_type;
        typedef CSVRow*                     pointer;
        typedef CSVRow&                     reference;

        CSVIterator(std::istream& str)  :m_str(str.good()?&str:nullptr) { ++(*this); }
        CSVIterator()                   :m_str(nullptr) {}

        // Pre Increment
        CSVIterator& operator++()               {if (m_str) { if (!((*m_str) >> m_row)){m_str = nullptr;}}return *this;}
        // Post increment
        CSVIterator operator++(int)             {CSVIterator    tmp(*this);++(*this);return tmp;}
        CSVRow const& operator*()   const       {return m_row;}
        CSVRow const* operator->()  const       {return &m_row;}

        bool operator==(CSVIterator const& rhs) {return ((this == &rhs) || ((this->m_str == nullptr) && (rhs.m_str == nullptr)));}
        bool operator!=(CSVIterator const& rhs) {return !((*this) == rhs);}
    private:
        std::istream*       m_str;
        CSVRow              m_row;
};


int main()
{
    std::ifstream       file("plop.csv");

    for(CSVIterator loop(file); loop != CSVIterator(); ++loop)
    {
        std::cout << "4th Element(" << (*loop)[3] << ")\n";
    }
}

Now that we are in 2020, let's add a CSVRange object:

class CSVRange
{
    std::istream&   stream;
    public:
        CSVRange(std::istream& str)
            : stream(str)
        {}
        CSVIterator begin() const {return CSVIterator{stream};}
        CSVIterator end()   const {return CSVIterator{};}
};

int main()
{
    std::ifstream       file("plop.csv");

    for(auto& row: CSVRange(file))
    {
        std::cout << "4th Element(" << row[3] << ")\n";
    }
}
Randolf answered 13/7, 2009 at 15:37 Comment(37)
This is exactly what I wanted! Now, some extra credit..how would I make this into a class with a constructor and two methods: firstLine() and nextLine(). std::istream doesn't have a default constructor..so what do I use instead? Thanks for the help!!Disembroil
first() next(). What is this Java! Only Joking.Randolf
or you could use some boost libraries to parse csv ... see belowEdwardedwardian
This code just saved me hours. I don't usually use C++, but needed to resort to it to write a quick parser. This is a great boilerplate, and the code even compiles.Chiropractic
@conradlee: I always try and write code that will compile :-) Glad it helped. But as stefanB suggests you could also look at boost. Boost has a whole bunch of stuff that makes C++ easier including parser code.Randolf
one of the worst thing is to override == and != opearators. it s just wrong.Rinaldo
@DarthVader: An overlay broad statement that by its broadness is silly. If you would like to clarify why it is bad and then why this badness applies in this context.Randolf
so u dont think it is bad? u think it s silly to think it s bad?Rinaldo
@DarthVader: I think it is silly to make broad generalizations. The code above works correctly, so I can't actually see anything wrong with it. But if you have any specific comment on the above I will definitely consider it in this context. But I can see how you could come to that conclusion by mindlessly following a set of generalized rules for C# and applying them to another language.Randolf
+++1. Thanks for Iterator example. CSV done simply in so few lines of code with good design. One can take this and handle quotes and other dialects of CSV.Aperiodic
also, if you run into weird linking problems with the above code because another library somewhere defines istream::operator>> (like Eigen), add an inline before the operator declaration to fix it.Edgerton
@sebastian_k: I think you are confused about what you solved by adding inline :-( . Namespace conflicts should be resolved by using putting stuff in an explicit namespace. inline will help the linker with problems because you put the definition in a header file and included it multiple times.Randolf
The parsing part is missing, one still ends up with strings. This is just an over-engineered line splitter.Dufour
Why can't i use getline(stringstream(line),cell,',') instead of stringstream lineStream(line); getline(lineStream,cell,',');?Gyno
@kirill_igum: That's a good question that deserves its own set of answers. Better to ask on the main site rather than in comments.Randolf
This is the simplest and cleanest example of how to make an iterator class I've ever seen.Metalanguage
the iterator is not reading the last line. what modifications should i make so that the last line will be read? thanksChumley
@tonytz: Works fine for me. I have a chat room. Post your code and I will see if there is an issue chat.stackoverflow.com/rooms/113764/csvRandolf
Thanks! change the implementation of operator++() to the following code to fix the "last line not imported"-error: CSVIterator& operator++() {if (m_str) { if (!((*m_str) >> m_row)){;m_str = NULL;}}return *this;} If you're interested, find the explaination in Loki's chatroom. Thanks for this great piece of code!Dwightdwindle
@StefanWoe: That line was placed in the code above many moons ago.Randolf
:D oh I was using the code since months and just discovered the bug while testing, so I went back to this page. So yeah, the above code is safe to use! Great work @LokiAstari ! :)Dwightdwindle
I find that a line like: a,b, only adds two values to m_data. I think it should add three (the third one being an empty string). I worked around this by adding m_data.push_back(""); after the while loop. Since I'm accessing the cells by index, the extra values don't bother me.Combined
@MatrixManAtYrService: Without any testing I would try: if (!lineStream){m_data.push_back("");} So that it only pushes an empty value when there was a comma with nothing after it. If you test that and it works I'll update the answer above.Randolf
@LokiAstari That looks much better than my hack. I tested it out and it does the trick.Combined
@LokiAstari Could you explain a bit on if (!lineStream)? Does the case where there's no trailing comma also pass this conditional statement? Also, for m_data.push_back(""), where does m_data come from? Shouldn't it be result.push_back("")?Bim
@Bim Sorry. Typo (when fixing a complaint from comments). Yes the if (!lineStream) needs more (fixed). And m_data should be result (fixed).Randolf
@LokiAstari Maybe just if (cell.empty())? It seems !lineStream is also true for normal line. Also, it seems that the std::vector result would also append a "" at the end of the vector. I think this is because that right before the end-of-file condition is reached, there's just another empty cell between \n and end-of-file condition in a csv file.Bim
So this setup works for string characters? If I want to read integer or float numbers?Hydra
@drizo: You have a string you can convert that to any type by de-serializing it into an object of your appropriate type. There are several ways to de-serialize the object so that is a question you should ask on SO.Randolf
@Rinaldo it's incorrect to not overload operator!= for a class when you have a member alias typedef std::input_iterator_tag iterator_category; see InputIteratorCabinetwork
See also an interesting library implementing iterators and ranges as an iostream-like class: github.com/roman-kashitsyn/text-csvSaldana
is there a way to have a function foo with an argument n that reads the csv file until line n? Specifically, first time foo is called with arg n1, it reads the file from line 1 to line n; second time it is called with arg n2, it reads the file from n1+1 till n2, and so on...Tonometer
@Tonometer That would be trivial. Simply keep the ifstream file; object at the top level. Pass the file and the value n to the function. The function is then simply a loop that reads n lines.Randolf
The text about escaping is too complicated. I understand that there is some problem with escaping, but I don't understand exactly which limitations this code has. The text literally says "If you can't escape then..." so it's probably too obvious to state explicitly, but I don't get it. Still useful if you don't care about escaping.Subjugate
@anatolyg: sometimes you want to include the comma character in text. This code can not distinguish between a comma that separates fields and a comma that is part of the text. So for this code a comma will always separate fields.Randolf
@MartinYork if this is the same as you wanted to write in your answer, this is clearer. But does the description at the beginning of the answer say anything else/different? It's unclear to me.Subjugate
@Rinaldo advice for C# is applicable to C#. It is required for an iterator type in C++ to implement !=.Cabinetwork
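
Regarding the comments above about converting fields to numeric types: CSVRow hands back strings (string views), so conversion is a separate step. A minimal sketch, not part of the original answer, assuming the CSVRow/CSVRange classes defined above and an invented column layout whose first two columns hold an integer and a double:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream file("plop.csv");

    for (auto& row : CSVRange(file))
    {
        // std::stoi / std::stod throw std::invalid_argument on malformed input.
        int    id    = std::stoi(std::string(row[0]));
        double price = std::stod(std::string(row[1]));
        std::cout << id << " -> " << price << "\n";
    }
}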
P
85

My version uses nothing but the standard C++11 library. It copes well with Excel-style CSV quoting:

spam eggs,"foo,bar","""fizz buzz"""
1.23,4.567,-8.00E+09

The code is written as a finite-state machine and consumes one character at a time. I think that makes it easier to reason about.

#include <istream>
#include <string>
#include <vector>

enum class CSVState {
    UnquotedField,
    QuotedField,
    QuotedQuote
};

std::vector<std::string> readCSVRow(const std::string &row) {
    CSVState state = CSVState::UnquotedField;
    std::vector<std::string> fields {""};
    size_t i = 0; // index of the current field
    for (char c : row) {
        switch (state) {
            case CSVState::UnquotedField:
                switch (c) {
                    case ',': // end of field
                              fields.push_back(""); i++;
                              break;
                    case '"': state = CSVState::QuotedField;
                              break;
                    default:  fields[i].push_back(c);
                              break; }
                break;
            case CSVState::QuotedField:
                switch (c) {
                    case '"': state = CSVState::QuotedQuote;
                              break;
                    default:  fields[i].push_back(c);
                              break; }
                break;
            case CSVState::QuotedQuote:
                switch (c) {
                    case ',': // , after closing quote
                              fields.push_back(""); i++;
                              state = CSVState::UnquotedField;
                              break;
                    case '"': // "" -> "
                              fields[i].push_back('"');
                              state = CSVState::QuotedField;
                              break;
                    default:  // end of quote
                              state = CSVState::UnquotedField;
                              break; }
                break;
        }
    }
    return fields;
}

/// Read CSV file, Excel dialect. Accept "quoted fields ""with quotes"""
std::vector<std::vector<std::string>> readCSV(std::istream &in) {
    std::vector<std::vector<std::string>> table;
    std::string row;
    while (!in.eof()) {
        std::getline(in, row);
        if (in.bad() || in.fail()) {
            break;
        }
        auto fields = readCSVRow(row);
        table.push_back(fields);
    }
    return table;
}
Paleography answered 20/5, 2015 at 0:59 Comment(2)
The top answer didn't work for me, as I am on an older compiler. This answer worked, vector initialisation may require this: const char *vinit[] = {""}; vector<string> fields(vinit, end(vinit)); Knock
It cannot parse a newline in field - such CSV file is in C++ string "\"a\nb\"\n". At least from LibreOffice.org Calc. MS Excel 365 cannot export as CSV and I do not have native MS Excel.Kilian
K
55

Solution using Boost Tokenizer:

std::vector<std::string> vec;
using namespace boost;
tokenizer<escaped_list_separator<char> > tk(
   line, escaped_list_separator<char>('\\', ',', '\"'));
for (tokenizer<escaped_list_separator<char> >::iterator i(tk.begin());
   i!=tk.end();++i) 
{
   vec.push_back(*i);
}
Knavery answered 13/7, 2009 at 23:41 Comment(3)
The boost tokenizer doesn't fully support the complete CSV standard, but there are some quick workarounds. See https://mcmap.net/q/98258/-how-can-i-read-and-parse-csv-files-in-c/…Miscount
Do you have to have the whole boost library on your machine, or can you just use a subset of their code to do this? 256mb seems like a lot for CSV parsing..Masteratarms
@Masteratarms : You can use the bcp utility that comes with boost to extract only the headers you actually need.Fichte
H
34

The C++ String Toolkit Library (StrTk) has a token grid class that allows you to load data either from text files, strings or char buffers, and to parse/process them in a row-column fashion.

You can specify the row delimiters and column delimiters or just use the defaults.

void foo()
{
   std::string data = "1,2,3,4,5\n"
                      "0,2,4,6,8\n"
                      "1,3,5,7,9\n";

   strtk::token_grid grid(data,data.size(),",");

   for(std::size_t i = 0; i < grid.row_count(); ++i)
   {
      strtk::token_grid::row_type r = grid.row(i);
      for(std::size_t j = 0; j < r.size(); ++j)
      {
         std::cout << r.get<int>(j) << "\t";
      }
      std::cout << std::endl;
   }
   std::cout << std::endl;
}

More examples can be found Here

Hornbeck answered 25/9, 2009 at 2:15 Comment(2)
Though strtk supports doublequoted fields, and even stripping the surrounding quotes (via options.trim_dquotes = true), it doesn't support removing doubled doublequotes (e.g. the field "She said ""oh no"", and left." as the c-string "She said \"oh no\", and left."). You'll have to do that yourself.Bull
When using strtk, you'll also have to manually handle double-quoted fields that contain newline characters.Bull
E
32

You can use Boost Tokenizer with escaped_list_separator.

escaped_list_separator parses a superset of CSV; see Boost::tokenizer.

This only uses the Boost Tokenizer header files; no linking to Boost libraries is required.

Here is an example (see Parse CSV File With Boost Tokenizer In C++ or Boost::tokenizer for details):

#include <iostream>     // cout, endl
#include <fstream>      // fstream
#include <vector>
#include <string>
#include <algorithm>    // copy
#include <iterator>     // ostream_operator
#include <boost/tokenizer.hpp>

int main()
{
    using namespace std;
    using namespace boost;
    string data("data.csv");

    ifstream in(data.c_str());
    if (!in.is_open()) return 1;

    typedef tokenizer< escaped_list_separator<char> > Tokenizer;
    vector< string > vec;
    string line;

    while (getline(in,line))
    {
        Tokenizer tok(line);
        vec.assign(tok.begin(),tok.end());

        // vector now contains strings from one row, output to cout here
        copy(vec.begin(), vec.end(), ostream_iterator<string>(cout, "|"));

        cout << "\n----------------------" << endl;
    }
}
Edwardedwardian answered 24/2, 2010 at 0:2 Comment(4)
And if you want to be able to parse embedded new lines mybyteofcode.blogspot.com/2010/11/….Edwardedwardian
While this technique works, I have found it to have very poor performance. Parsing a 90000 line CSV file with ten fields per line takes around 8 seconds on my 2 GHz Xeon. The Python Standard Library csv module parses the same file in about 0.3 seconds.Tsarevitch
@Rob that's interesting - what does the Python csv do differently?Slicer
@RobSmallshire it's a simple example code not a high performance one. This code makes copies of all the fields per line. For higher performance you would use different options and return just references to fields in buffer instead of making copies.Edwardedwardian
G
29

It is not overkill to use Spirit for parsing CSVs. Spirit is well suited for micro-parsing tasks. For instance, with Spirit 2.1, it is as easy as:

bool r = phrase_parse(first, last,

    //  Begin grammar
    (
        double_ % ','
    )
    ,
    //  End grammar

    space, v);

The vector, v, gets stuffed with the values. There is a series of tutorials touching on this in the new Spirit 2.1 docs that's just been released with Boost 1.41.

The tutorials progress from simple to complex. The CSV parsers are presented somewhere in the middle and touch on various techniques for using Spirit. The generated code is as tight as hand-written code. Check out the generated assembler!
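
For reference, a minimal self-contained sketch of the snippet above, assuming only that the Boost.Spirit Qi headers are available (the input string is invented; as noted in the comments, this parses a comma-separated list of doubles, not full CSV):

#include <boost/spirit/include/qi.hpp>

#include <iostream>
#include <string>
#include <vector>

int main()
{
    namespace qi    = boost::spirit::qi;
    namespace ascii = boost::spirit::ascii;

    std::string line = "1.5, 2.25, -3.75";
    std::vector<double> v;

    auto first = line.cbegin();
    auto last  = line.cend();

    // Grammar: doubles separated by commas; surrounding whitespace is skipped.
    bool r = qi::phrase_parse(first, last, qi::double_ % ',', ascii::space, v);

    if (r && first == last)
        for (double d : v)
            std::cout << d << '\n';
}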

Garling answered 19/11, 2009 at 16:0 Comment(12)
Actually it is overkill, the compilation time hit is enormous and makes using Spirit for simple "micro-parsing tasks" unreasonable.Isolde
Also I'd like to point out that the code above does not parse CSV, it just parses a range of the type of the vector delimited by commas. It doesn't handle quotes, varying types of columns etc. In short 19 votes for something that does answer the question at all seems a bit suspicious to me.Isolde
@Isolde Nonsense. The compilation time hit for small parsers isn’t that big, but it’s also irrelevant because you stuff the code into its own compilation unit and compile it once. Then you only need to link it and that’s as efficient as it gets. And as for your other comment, there are as many dialects of CSV as there are processors for it. This one certainly isn’t a very useful dialect but it can be trivially extended to handle quoted values.Harpy
@konrad: Simply including "#include <boost/spirit/include/qi.hpp>" in an empty file with only a main and nothing else takes 9.7sec with MSVC 2012 on a corei7 running at 2.ghz. It's needless bloat. The accepted answer compiles in under 2secs on the same machine, I'd hate to imagine how long the 'proper' Boost.Spirit example would take to compile.Isolde
@Isolde It takes substantially less for me (~4sec) but like I said, that’s quite irrelevant since you only need to compile that TU once. The time you save implementing the parser easily offsets the compilation cost here. As for “proper” Boost.Sprit grammars: a big grammar can take several minutes to compile. But once again: that cost is offset easily by the ease of writing the parser, and this is not a continuous cost since you don’t need to recompile the parser every time you recompile the client code.Harpy
@Isolde I have to agree with you the overhead in using spirit for something as simple as cvs processing is far too great.Cammycamomile
I for one think that it depends entirely on the application you're writing. In my case, I might use boost because we already have it compiled, and even if Spirit is not oriented to performance, I want it to load initial data for tests. It's what, only 1-3 lines of code? Count me in, I don't have to write a parser, even if it would be easy.Shaia
@Shaia Actually, Spirit is ridiculously efficient. Yes, compilation takes long but execution speed trivially surpasses hand-written parsers, even non-naive implementations. To illustrate, the boost::qi::int_ parser is by far the most efficient method of all existing libraries, including hand-written code.Harpy
@KonradRudolph Wow, is that so? I'm impressed! I ended up using it for csv parsing in tests, it works great!Shaia
Agree with @KonradRudolph -- people concerned with performance here are concerned with compilation performance which is the opposite of the intention of Spirit and what coders are usually concerned with. This is a problem with c++ templates in general. Re the article, it would be nice to include a more complete example for csv parsing rather than just a link to a tutorial ...Beisel
The example shows a comma separated list of doubles, not a CSV.Bays
I like how the author of this answer is also the main author of the suggested library. Free advertisement? :DDrifter
D
18

If you DO care about parsing CSV correctly, this will do it...relatively slowly as it works one char at a time.

 void ParseCSV(const string& csvSource, vector<vector<string> >& lines)
    {
       bool inQuote(false);
       bool newLine(false);
       string field;
       lines.clear();
       vector<string> line;

       string::const_iterator aChar = csvSource.begin();
       while (aChar != csvSource.end())
       {
          switch (*aChar)
          {
          case '"':
             newLine = false;
             inQuote = !inQuote;
             break;

          case ',':
             newLine = false;
             if (inQuote == true)
             {
                field += *aChar;
             }
             else
             {
                line.push_back(field);
                field.clear();
             }
             break;

          case '\n':
          case '\r':
             if (inQuote == true)
             {
                field += *aChar;
             }
             else
             {
                if (newLine == false)
                {
                   line.push_back(field);
                   lines.push_back(line);
                   field.clear();
                   line.clear();
                   newLine = true;
                }
             }
             break;

          default:
             newLine = false;
             field.push_back(*aChar);
             break;
          }

          aChar++;
       }

       if (field.size())
          line.push_back(field);

       if (line.size())
          lines.push_back(line);
    }
Dort answered 19/3, 2010 at 23:18 Comment(1)
AFAICT this won't handle embedded quote marks correctly (e.g. "This string has ""embedded quote marks""","foo",1))Baltoslavic
M
16

When using the Boost Tokenizer escaped_list_separator for CSV files, one should be aware of the following:

  1. It requires an escape character (default: back-slash \)
  2. It requires a splitter/separator character (default: comma ,)
  3. It requires a quote character (default: double-quote ")

The CSV format specified by wiki states that data fields can contain separators in quotes (supported):

1997,Ford,E350,"Super, luxurious truck"

The CSV format specified by wiki states that a quote character inside a field should be escaped by doubling it (escaped_list_separator will strip away all quote characters):

1997,Ford,E350,"Super ""luxurious"" truck"

The CSV format doesn't specify that any back-slash characters should be stripped away (escaped_list_separator will strip away all escape characters).

A possible work-around to fix the default behavior of the boost escaped_list_separator:

  1. First, replace all back-slash characters (\) with two back-slashes (\\) so they are not stripped away.
  2. Second, replace every pair of double-quotes ("") with a back-slash followed by a quote (\") so the embedded quote survives tokenization.

This work-around has the side effect that empty data fields represented by a pair of double-quotes ("") will be transformed into a token consisting of a single quote character. When iterating through the tokens, one must check whether a token is a single quote and treat it as an empty string.

Not pretty, but it works, as long as there are no newlines within the quotes.
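
For illustration, a hedged sketch of that work-around (the preprocess helper is hypothetical, not part of Boost; it assumes boost/algorithm/string/replace.hpp and boost/tokenizer.hpp are available):

#include <boost/algorithm/string/replace.hpp>
#include <boost/tokenizer.hpp>

#include <iostream>
#include <string>

// Hypothetical helper implementing the two replacement steps described above.
std::string preprocess(std::string line)
{
    boost::replace_all(line, "\\", "\\\\");   // 1. keep literal back-slashes
    boost::replace_all(line, "\"\"", "\\\""); // 2. "" -> \" so embedded quotes survive
    return line;
}

int main()
{
    std::string line = "1997,Ford,E350,\"Super \"\"luxurious\"\" truck\"";

    // Keep the preprocessed string alive; the tokenizer only stores iterators into it.
    std::string prepared = preprocess(line);
    boost::tokenizer<boost::escaped_list_separator<char>> tok(prepared);

    for (const std::string& field : tok)
        std::cout << field << '\n';
    // Side effect described above: an empty field written as "" would now come
    // out as a token holding a single quote character.
}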

Miscount answered 20/10, 2009 at 15:15 Comment(0)
H
15

I wrote a header-only, C++11 CSV parser. It's well tested, fast, supports the entire CSV spec (quoted fields, delimiter/terminator in quotes, quote escaping, etc.), and is configurable to account for the CSVs that don't adhere to the specification.

Configuration is done through a fluent interface:

// constructor accepts any input stream
CsvParser parser = CsvParser(std::cin)
  .delimiter(';')    // delimited by ; instead of ,
  .quote('\'')       // quoted fields use ' instead of "
  .terminator('\0'); // terminated by \0 instead of by \r\n, \n, or \r

Parsing is just a range based for loop:

#include <fstream>
#include <iostream>
#include "../parser.hpp"

using namespace aria::csv;

int main() {
  std::ifstream f("some_file.csv");
  CsvParser parser(f);

  for (auto& row : parser) {
    for (auto& field : row) {
      std::cout << field << " | ";
    }
    std::cout << std::endl;
  }
}
Hepatitis answered 29/5, 2017 at 17:59 Comment(4)
Nice work, but you need to add three more things: (1) read header (2) provide fields indexing by name (3) don't reallocate memory in loop by reusing the same vector of stringsDaren
@MaksymGanenko I do #3. Could you elaborate on #2?Hepatitis
It's very useful to get fields not by position in a row, but by name given in the header (in the first row of CSV table). For example, I expect CSV table with "Date" field, but I don't know what's "Date" field index in a row.Daren
@MaksymGanenko ah I see what you mean. There's github.com/ben-strasser/fast-cpp-csv-parser for when you know the columns of your CSV at compile time, and it's probably better than mine. What I wanted was a CSV parser for the cases where you wanted to use the same code for many different CSVs and don't know what they look like ahead of time. So I probably won't add #2, but I will add #1 sometime in the future.Hepatitis
A
11

As all the CSV questions seem to get redirected here, I thought I'd post my answer here. This answer does not directly address the asker's question. I wanted to be able to read in a stream that is known to be in CSV format, where the types of each field were already known. Of course, the method below could be used to treat every field as a string type.

As an example of how I wanted to be able to use a CSV input stream, consider the following input (taken from wikipedia's page on CSV):

const char input[] =
"Year,Make,Model,Description,Price\n"
"1997,Ford,E350,\"ac, abs, moon\",3000.00\n"
"1999,Chevy,\"Venture \"\"Extended Edition\"\"\",\"\",4900.00\n"
"1999,Chevy,\"Venture \"\"Extended Edition, Very Large\"\"\",\"\",5000.00\n"
"1996,Jeep,Grand Cherokee,\"MUST SELL!\n\
air, moon roof, loaded\",4799.00\n"
;

Then, I wanted to be able to read in the data like this:

std::istringstream ss(input);
std::string title[5];
int year;
std::string make, model, desc;
float price;
csv_istream(ss)
    >> title[0] >> title[1] >> title[2] >> title[3] >> title[4];
while (csv_istream(ss)
       >> year >> make >> model >> desc >> price) {
    //...do something with the record...
}

This was the solution I ended up with.

struct csv_istream {
    std::istream &is_;
    csv_istream (std::istream &is) : is_(is) {}
    void scan_ws () const {
        while (is_.good()) {
            int c = is_.peek();
            if (c != ' ' && c != '\t') break;
            is_.get();
        }
    }
    void scan (std::string *s = 0) const {
        std::string ws;
        int c = is_.get();
        if (is_.good()) {
            do {
                if (c == ',' || c == '\n') break;
                if (s) {
                    ws += c;
                    if (c != ' ' && c != '\t') {
                        *s += ws;
                        ws.clear();
                    }
                }
                c = is_.get();
            } while (is_.good());
            if (is_.eof()) is_.clear();
        }
    }
    template <typename T, bool> struct set_value {
        void operator () (std::string in, T &v) const {
            std::istringstream(in) >> v;
        }
    };
    template <typename T> struct set_value<T, true> {
        template <bool SIGNED> void convert (std::string in, T &v) const {
            if (SIGNED) v = ::strtoll(in.c_str(), 0, 0);
            else v = ::strtoull(in.c_str(), 0, 0);
        }
        void operator () (std::string in, T &v) const {
            convert<is_signed_int<T>::val>(in, v);
        }
    };
    template <typename T> const csv_istream & operator >> (T &v) const {
        std::string tmp;
        scan(&tmp);
        set_value<T, is_int<T>::val>()(tmp, v);
        return *this;
    }
    const csv_istream & operator >> (std::string &v) const {
        v.clear();
        scan_ws();
        if (is_.peek() != '"') scan(&v);
        else {
            std::string tmp;
            is_.get();
            std::getline(is_, tmp, '"');
            while (is_.peek() == '"') {
                v += tmp;
                v += is_.get();
                std::getline(is_, tmp, '"');
            }
            v += tmp;
            scan();
        }
        return *this;
    }
    template <typename T>
    const csv_istream & operator >> (T &(*manip)(T &)) const {
        is_ >> manip;
        return *this;
    }
    operator bool () const { return !is_.fail(); }
};

With the following helpers that may be simplified by the new integral traits templates in C++11:

template <typename T> struct is_signed_int { enum { val = false }; };
template <> struct is_signed_int<short> { enum { val = true}; };
template <> struct is_signed_int<int> { enum { val = true}; };
template <> struct is_signed_int<long> { enum { val = true}; };
template <> struct is_signed_int<long long> { enum { val = true}; };

template <typename T> struct is_unsigned_int { enum { val = false }; };
template <> struct is_unsigned_int<unsigned short> { enum { val = true}; };
template <> struct is_unsigned_int<unsigned int> { enum { val = true}; };
template <> struct is_unsigned_int<unsigned long> { enum { val = true}; };
template <> struct is_unsigned_int<unsigned long long> { enum { val = true}; };

template <typename T> struct is_int {
    enum { val = (is_signed_int<T>::val || is_unsigned_int<T>::val) };
};

Try it online!
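
As an aside, with the C++11 <type_traits> header those helpers could plausibly collapse into something like the sketch below. This is an untested assumption on my part; note that, unlike the explicit specializations above, it also classifies char and bool as integers:

#include <type_traits>

template <typename T> struct is_signed_int {
    enum { val = std::is_integral<T>::value && std::is_signed<T>::value };
};

template <typename T> struct is_unsigned_int {
    enum { val = std::is_integral<T>::value && std::is_unsigned<T>::value };
};

template <typename T> struct is_int {
    enum { val = std::is_integral<T>::value };
};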

Alonso answered 18/7, 2012 at 1:33 Comment(0)
B
9

You might want to look at my FOSS project CSVfix (updated link), which is a CSV stream editor written in C++. The CSV parser is no prize, but does the job and the whole package may do what you need without you writing any code.

See alib/src/a_csv.cpp for the CSV parser, and csvlib/src/csved_ioman.cpp (IOManager::ReadCSV) for a usage example.

Brenn answered 13/7, 2009 at 15:29 Comment(3)
Seems great ... What about the status beta / production ?Overpay
The status is "in development", as suggested by the version numbers. I really need more feed back from users before going to version 1.0. Plus I have a couple more features I want to add, to do with XML production from CSV.Brenn
Bookmarking it, and will give it a try next time I have to deal with those wonderful standard CSV files ...Overpay
P
8

Another CSV I/O library can be found here:

http://code.google.com/p/fast-cpp-csv-parser/

#include "csv.h"

int main(){
  io::CSVReader<3> in("ram.csv");
  in.read_header(io::ignore_extra_column, "vendor", "size", "speed");
  std::string vendor; int size; double speed;
  while(in.read_row(vendor, size, speed)){
    // do stuff with the data
  }
}
Prado answered 17/12, 2012 at 23:53 Comment(2)
Nice, but it forces you to choose the number of columns at compile time. Not very useful for many applications.Phyllida
The github link to the same repository: github.com/ben-strasser/fast-cpp-csv-parserMargalo
I
7

Another solution similar to Loki Astari's answer, in C++11. Rows here are std::tuples of a given type. The code scans one line, then scans until each delimiter, and then converts and dumps the value directly into the tuple (with a bit of template code).

for (auto row : csv<std::string, int, float>(file, ',')) {
    std::cout << "first col: " << std::get<0>(row) << std::endl;
}

Advantages:

  • quite clean and simple to use, only C++11.
  • automatic type conversion into std::tuple<t1, ...> via operator>>.

What's missing:

  • escaping and quoting
  • no error handling in case of malformed CSV.

The main code:

#include <iterator>
#include <sstream>
#include <string>

namespace csvtools {
    /// Read the last element of the tuple without calling recursively
    template <std::size_t idx, class... fields>
    typename std::enable_if<idx >= std::tuple_size<std::tuple<fields...>>::value - 1>::type
    read_tuple(std::istream &in, std::tuple<fields...> &out, const char delimiter) {
        std::string cell;
        std::getline(in, cell, delimiter);
        std::stringstream cell_stream(cell);
        cell_stream >> std::get<idx>(out);
    }

    /// Read the @p idx-th element of the tuple and then calls itself with @p idx + 1 to
    /// read the next element of the tuple. Automatically falls in the previous case when
    /// reaches the last element of the tuple thanks to enable_if
    template <std::size_t idx, class... fields>
    typename std::enable_if<idx < std::tuple_size<std::tuple<fields...>>::value - 1>::type
    read_tuple(std::istream &in, std::tuple<fields...> &out, const char delimiter) {
        std::string cell;
        std::getline(in, cell, delimiter);
        std::stringstream cell_stream(cell);
        cell_stream >> std::get<idx>(out);
        read_tuple<idx + 1, fields...>(in, out, delimiter);
    }
}

/// Iterable csv wrapper around a stream. @p fields the list of types that form up a row.
template <class... fields>
class csv {
    std::istream &_in;
    const char _delim;
public:
    typedef std::tuple<fields...> value_type;
    class iterator;

    /// Construct from a stream.
    inline csv(std::istream &in, const char delim) : _in(in), _delim(delim) {}

    /// Status of the underlying stream
    /// @{
    inline bool good() const {
        return _in.good();
    }
    inline const std::istream &underlying_stream() const {
        return _in;
    }
    /// @}

    inline iterator begin();
    inline iterator end();
private:

    /// Reads a line into a stringstream, and then reads the line into a tuple, that is returned
    inline value_type read_row() {
        std::string line;
        std::getline(_in, line);
        std::stringstream line_stream(line);
        std::tuple<fields...> retval;
        csvtools::read_tuple<0, fields...>(line_stream, retval, _delim);
        return retval;
    }
};

/// Iterator; just calls recursively @ref csv::read_row and stores the result.
template <class... fields>
class csv<fields...>::iterator {
    csv::value_type _row;
    csv *_parent;
public:
    typedef std::input_iterator_tag iterator_category;
    typedef csv::value_type         value_type;
    typedef std::size_t             difference_type;
    typedef csv::value_type *       pointer;
    typedef csv::value_type &       reference;

    /// Construct an empty/end iterator
    inline iterator() : _parent(nullptr) {}
    /// Construct an iterator at the beginning of the @p parent csv object.
    inline iterator(csv &parent) : _parent(parent.good() ? &parent : nullptr) {
        ++(*this);
    }

    /// Read one row, if possible. Set to end if parent is not good anymore.
    inline iterator &operator++() {
        if (_parent != nullptr) {
            _row = _parent->read_row();
            if (!_parent->good()) {
                _parent = nullptr;
            }
        }
        return *this;
    }

    inline iterator operator++(int) {
        iterator copy = *this;
        ++(*this);
        return copy;
    }

    inline csv::value_type const &operator*() const {
        return _row;
    }

    inline csv::value_type const *operator->() const {
        return &_row;
    }

    bool operator==(iterator const &other) {
        return (this == &other) or (_parent == nullptr and other._parent == nullptr);
    }
    bool operator!=(iterator const &other) {
        return not (*this == other);
    }
};

template <class... fields>
typename csv<fields...>::iterator csv<fields...>::begin() {
    return iterator(*this);
}

template <class... fields>
typename csv<fields...>::iterator csv<fields...>::end() {
    return iterator();
}

I put a tiny working example on GitHub; I've been using it for parsing some numerical data and it served its purpose.

Ingles answered 5/12, 2015 at 18:34 Comment(2)
You may not care about inlining, because most of compilers decide it on its own. At least I am sure in Visual C++. It can inline method independently of your method specification.Moorish
That's precisely why I marked them explicitly. Gcc and Clang, the ones I mostly use, have as well their own conventions. A "inline" keyword should be just an incentive.Ingles
P
7

You can use the header-only Csv::Parser library.

  • It fully supports RFC 4180, including quoted values, escaped quotes, and newlines in field values.
  • It requires only standard C++ (C++17).
  • It supports reading CSV data from std::string_view at compile-time.
  • It's extensively tested using Catch2.
Planarian answered 22/2, 2021 at 8:44 Comment(0)
R
6

Here is another implementation of a Unicode CSV parser (works with wchar_t). I wrote part of it, while Jonathan Leffler wrote the rest.

Note: This parser is aimed at replicating Excel's behavior as closely as possible, specifically when importing broken or malformed CSV files.

This is the original question - Parsing CSV file with multiline fields and escaped double quotes

This is the code as an SSCCE (Short, Self-Contained, Correct Example).

#include <stdbool.h>
#include <wchar.h>
#include <wctype.h>

extern const wchar_t *nextCsvField(const wchar_t *p, wchar_t sep, bool *newline);

// Returns a pointer to the start of the next field,
// or zero if this is the last field in the CSV
// p is the start position of the field
// sep is the separator used, i.e. comma or semicolon
// newline says whether the field ends with a newline or with a comma
const wchar_t *nextCsvField(const wchar_t *p, wchar_t sep, bool *newline)
{
    // Parse quoted sequences
    if ('"' == p[0]) {
        p++;
        while (1) {
            // Find next double-quote
            p = wcschr(p, L'"');
            // If we don't find it or it's the last symbol
            // then this is the last field
            if (!p || !p[1])
                return 0;
            // Check for "", it is an escaped double-quote
            if (p[1] != '"')
                break;
            // Skip the escaped double-quote
            p += 2;
        }
    }

    // Find next newline or comma.
    wchar_t newline_or_sep[4] = L"\n\r ";
    newline_or_sep[2] = sep;
    p = wcspbrk(p, newline_or_sep);

    // If no newline or separator, this is the last field.
    if (!p)
        return 0;

    // Check if we had newline.
    *newline = (p[0] == '\r' || p[0] == '\n');

    // Handle "\r\n", otherwise just increment
    if (p[0] == '\r' && p[1] == '\n')
        p += 2;
    else
        p++;

    return p;
}

static wchar_t *csvFieldData(const wchar_t *fld_s, const wchar_t *fld_e, wchar_t *buffer, size_t buflen)
{
    wchar_t *dst = buffer;
    wchar_t *end = buffer + buflen - 1;
    const wchar_t *src = fld_s;

    if (*src == L'"')
    {
        const wchar_t *p = src + 1;
        while (p < fld_e && dst < end)
        {
            if (p[0] == L'"' && p + 1 < fld_e && p[1] == L'"')  // "" inside the field (bounds-checked against fld_e)
            {
                *dst++ = p[0];
                p += 2;
            }
            else if (p[0] == L'"')
            {
                p++;
                break;
            }
            else
                *dst++ = *p++;
        }
        src = p;
    }
    while (src < fld_e && dst < end)
        *dst++ = *src++;
    if (dst >= end)
        return 0;
    *dst = L'\0';
    return(buffer);
}

static void dissect(const wchar_t *line)
{
    const wchar_t *start = line;
    const wchar_t *next;
    bool     eol;
    wprintf(L"Input %3zd: [%.*ls]\n", wcslen(line), wcslen(line)-1, line);
    while ((next = nextCsvField(start, L',', &eol)) != 0)
    {
        wchar_t buffer[1024];
        wprintf(L"Raw Field: [%.*ls] (eol = %d)\n", (next - start - eol), start, eol);
        if (csvFieldData(start, next-1, buffer, sizeof(buffer)/sizeof(buffer[0])) != 0)
            wprintf(L"Field %3zd: [%ls]\n", wcslen(buffer), buffer);
        start = next;
    }
}

static const wchar_t multiline[] =
   L"First field of first row,\"This field is multiline\n"
    "\n"
    "but that's OK because it's enclosed in double quotes, and this\n"
    "is an escaped \"\" double quote\" but this one \"\" is not\n"
    "   \"This is second field of second row, but it is not multiline\n"
    "   because it doesn't start \n"
    "   with an immediate double quote\"\n"
    ;

int main(void)
{
    wchar_t line[1024];

    while (fgetws(line, sizeof(line)/sizeof(line[0]), stdin))
        dissect(line);
    dissect(multiline);

    return 0;
}
Renfroe answered 15/10, 2013 at 12:51 Comment(0)
P
4

This is an old thread, but it's still at the top of search results, so I'm adding my solution using std::stringstream and a simple string-replace method by Yves Baumes that I found here.

The following example will read a file line by line, ignore comment lines starting with // and parse the other lines into a combination of strings, ints and doubles. Stringstream does the parsing, but expects fields to be delimited by whitespace, so I use stringreplace to turn commas into spaces first. It handles tabs ok, but doesn't deal with quoted strings.

Bad or missing input is simply ignored, which may or may not be good, depending on your circumstance.

#include <string>
#include <sstream>
#include <fstream>

void StringReplace(std::string& str, const std::string& oldStr, const std::string& newStr)
// code by  Yves Baumes
// https://mcmap.net/q/100524/-how-do-i-search-find-and-replace-in-a-standard-string
{
  size_t pos = 0;
  while((pos = str.find(oldStr, pos)) != std::string::npos)
  {
     str.replace(pos, oldStr.length(), newStr);
     pos += newStr.length();
  }
}

void LoadCSV(std::string &filename) {
   std::ifstream stream(filename);
   std::string in_line;
   std::string Field;
   std::string Chan;
   int ChanType;
   double Scale;
   int Import;
   while (std::getline(stream, in_line)) {
      StringReplace(in_line, ",", " ");
      std::stringstream line(in_line);
      line >> Field >> Chan >> ChanType >> Scale >> Import;
      if (Field.substr(0,2)!="//") {
         // do your stuff 
         // this is CBuilder code for demonstration, sorry
         ShowMessage((String)Field.c_str() + "\n" + Chan.c_str() + "\n" + IntToStr(ChanType) + "\n" +FloatToStr(Scale) + "\n" +IntToStr(Import));
      }
   }
}
Peti answered 26/6, 2013 at 19:30 Comment(0)
C
4

I needed an easy-to-use C++ library for parsing CSV files but couldn't find any available, so I ended up building one. Rapidcsv is a C++11 header-only library which gives direct access to parsed columns (or rows) as vectors, in datatype of choice. For example:

#include <iostream>
#include <vector>
#include <rapidcsv.h>

int main()
{
  rapidcsv::Document doc("../tests/msft.csv");

  std::vector<float> close = doc.GetColumn<float>("Close");
  std::cout << "Read " << close.size() << " values." << std::endl;
}
Cliffordclift answered 29/5, 2017 at 14:2 Comment(2)
Nice work, but the library doesn't work properly if header has empty labels. That's typical for Excel/LibreOffice NxN table. Also, it may skip the last line of data. Unfortunately, your lib is not robust.Daren
Thanks for the feedback @MaksymGanenko I've fixed the "last line of data" bug for final lines w/o trailing line break. As for the other issue mentioned - "headers with empty labels" - I'm not sure what it refers to? The library should handle empty labels (both quoted and non-quoted). It can also read CSV without header row/column, but then it requires the user to specify this (col title id -1 and row title id -1). Please provide some more details or report a bug at the GitHub page if you have some specific use-case you'd like to see supported. Thanks!Cliffordclift
A
3

Here is code for reading a matrix; note that you also have a csvwrite function in MATLAB.

void loadFromCSV( const std::string& filename )
{
    std::ifstream       file( filename.c_str() );
    std::vector< std::vector<std::string> >   matrix;
    std::vector<std::string>   row;
    std::string                line;
    std::string                cell;

    while( file )
    {
        std::getline(file,line);
        std::stringstream lineStream(line);
        row.clear();

        while( std::getline( lineStream, cell, ',' ) )
            row.push_back( cell );

        if( !row.empty() )
            matrix.push_back( row );
    }

    for( int i=0; i<int(matrix.size()); i++ )
    {
        for( int j=0; j<int(matrix[i].size()); j++ )
            std::cout << matrix[i][j] << " ";

        std::cout << std::endl;
    }
}
Afterward answered 19/2, 2013 at 0:37 Comment(0)
C
3

You gotta feel proud when you use something as beautiful as boost::spirit.

Here is my attempt at a parser (almost) complying with the CSV specification at this link: CSV specs. (I didn't need line breaks within fields; also, spaces around the commas are discarded.)

After you overcome the shocking experience of waiting 10 seconds for this code to compile :), you can sit back and enjoy.

// csvparser.cpp
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix_operator.hpp>

#include <iostream>
#include <string>

namespace qi = boost::spirit::qi;
namespace bascii = boost::spirit::ascii;

template <typename Iterator>
struct csv_parser : qi::grammar<Iterator, std::vector<std::string>(), 
    bascii::space_type>
{
    qi::rule<Iterator, char()                                           > COMMA;
    qi::rule<Iterator, char()                                           > DDQUOTE;
    qi::rule<Iterator, std::string(),               bascii::space_type  > non_escaped;
    qi::rule<Iterator, std::string(),               bascii::space_type  > escaped;
    qi::rule<Iterator, std::string(),               bascii::space_type  > field;
    qi::rule<Iterator, std::vector<std::string>(),  bascii::space_type  > start;

    csv_parser() : csv_parser::base_type(start)
    {
        using namespace qi;
        using qi::lit;
        using qi::lexeme;
        using bascii::char_;

        start       = field % ',';
        field       = escaped | non_escaped;
        escaped     = lexeme['"' >> *( char_ -(char_('"') | ',') | COMMA | DDQUOTE)  >> '"'];
        non_escaped = lexeme[       *( char_ -(char_('"') | ',')                  )        ];
        DDQUOTE     = lit("\"\"")       [_val = '"'];
        COMMA       = lit(",")          [_val = ','];
    }

};

int main()
{
    std::cout << "Enter CSV lines [empty] to quit\n";

    using bascii::space;
    typedef std::string::const_iterator iterator_type;
    typedef csv_parser<iterator_type> csv_parser;

    csv_parser grammar;
    std::string str;
    int fid;
    while (getline(std::cin, str))
    {
        fid = 0;

        if (str.empty())
            break;

        std::vector<std::string> csv;
        std::string::const_iterator it_beg = str.begin();
        std::string::const_iterator it_end = str.end();
        bool r = phrase_parse(it_beg, it_end, grammar, space, csv);

        if (r && it_beg == it_end)
        {
            std::cout << "Parsing succeeded\n";
            for (auto& field: csv)
            {
                std::cout << "field " << ++fid << ": " << field << std::endl;
            }
        }
        else
        {
            std::cout << "Parsing failed\n";
        }
    }

    return 0;
}

Compile:

make csvparser

Test (example stolen from Wikipedia):

./csvparser
Enter CSV lines [empty] to quit

1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
Parsing succeeded
field 1: 1999
field 2: Chevy
field 3: Venture "Extended Edition, Very Large"
field 4: 
field 5: 5000.00

1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00"
Parsing failed
Cadastre answered 31/7, 2017 at 14:1 Comment(0)
D
3

This solution detects these four cases (the complete class is at https://github.com/pedro-vicente/csv-parser):

1,field 2,field 3,
1,field 2,"field 3 quoted, with separator",
1,field 2,"field 3
with newline",
1,field 2,"field 3
with newline and separator,",

It reads the file character by character, and reads one row at a time into a vector of strings, so it is suitable for very large files.

Usage: iterate until an empty row is returned (end of file). A row is a vector in which each entry is a CSV column.

read_csv_t csv;
csv.open("../test.csv");
std::vector<std::string> row;
while (true)
{
  row = csv.read_row();
  if (row.size() == 0)
  {
    break;
  }
}

The class declaration:

class read_csv_t
{
public:
  read_csv_t();
  int open(const std::string &file_name);
  std::vector<std::string> read_row();
private:
  std::ifstream m_ifs;
};

The implementation:

std::vector<std::string> read_csv_t::read_row()
{
  bool quote_mode = false;
  std::vector<std::string> row;
  std::string column;
  char c;
  while (m_ifs.get(c))
  {
    switch (c)
    {
      /////////////////////////////////////////////////////////////////////////////////////////////////////
      //separator ',' detected. 
      //in quote mode add character to column
      //push column if not in quote mode
      /////////////////////////////////////////////////////////////////////////////////////////////////////

    case ',':
      if (quote_mode == true)
      {
        column += c;
      }
      else
      {
        row.push_back(column);
        column.clear();
      }
      break;

      /////////////////////////////////////////////////////////////////////////////////////////////////////
      //quote '"' detected. 
      //toggle quote mode
      /////////////////////////////////////////////////////////////////////////////////////////////////////

    case '"':
      quote_mode = !quote_mode;
      break;

      /////////////////////////////////////////////////////////////////////////////////////////////////////
      //line end detected
      //in quote mode add character to column
      //return row if not in quote mode
      /////////////////////////////////////////////////////////////////////////////////////////////////////

    case '\n':
    case '\r':
      if (quote_mode == true)
      {
        column += c;
      }
      else
      {
        return row;
      }
      break;

      /////////////////////////////////////////////////////////////////////////////////////////////////////
      //default, add character to column
      /////////////////////////////////////////////////////////////////////////////////////////////////////

    default:
      column += c;
      break;
    }
  }

  //return empty vector if end of file detected 
  m_ifs.close();
  std::vector<std::string> v;
  return v;
}
Directorial answered 1/9, 2017 at 3:47 Comment(0)
D
2

Another quick and easy way is to use Boost.Fusion I/O:

#include <iostream>
#include <sstream>

#include <boost/fusion/adapted/boost_tuple.hpp>
#include <boost/fusion/sequence/io.hpp>

namespace fusion = boost::fusion;

struct CsvString
{
    std::string value;

    // Stop reading a string once a CSV delimiter is encountered.
    friend std::istream& operator>>(std::istream& s, CsvString& v) {
        v.value.clear();
        for(;;) {
            auto c = s.peek();
            if(std::istream::traits_type::eof() == c || ',' == c || '\n' == c)
                break;
            v.value.push_back(c);
            s.get();
        }
        return s;
    }

    friend std::ostream& operator<<(std::ostream& s, CsvString const& v) {
        return s << v.value;
    }
};

int main() {
    std::stringstream input("abc,123,true,3.14\n"
                            "def,456,false,2.718\n");

    typedef boost::tuple<CsvString, int, bool, double> CsvRow;

    using fusion::operator<<;
    std::cout << std::boolalpha;

    using fusion::operator>>;
    input >> std::boolalpha;
    input >> fusion::tuple_open("") >> fusion::tuple_close("\n") >> fusion::tuple_delimiter(',');

    for(CsvRow row; input >> row;)
        std::cout << row << '\n';
}

Outputs:

(abc 123 true 3.14)
(def 456 false 2.718)
Dufour answered 3/7, 2014 at 9:7 Comment(0)
P
2

The first thing you need to do is make sure the file exists. To accomplish this, just try to open a file stream on the path. After you have opened the file stream, use stream.fail() to see whether it worked as expected.

bool fileExists(string fileName)
{
    ifstream test;

    test.open(fileName.c_str());

    if (test.fail())
    {
        test.close();
        return false;
    }
    else
    {
        test.close();
        return true;
    }
}

You must also verify that the file provided is the correct type of file. To accomplish this you need to look through the file path provided until you find the file extension. Once you have the file extension make sure that it is a .csv file.

bool verifyExtension(string filename)
{
    int period = 0;

    for (unsigned int i = 0; i < filename.length(); i++)
    {
        if (filename[i] == '.')
            period = i;
    }

    string extension;

    for (unsigned int i = period; i < filename.length(); i++)
        extension += filename[i];

    if (extension == ".csv")
        return true;
    else
        return false;
}

This function will return the file extension which is used later in an error message.

string getExtension(string filename)
{
    int period = 0;

    for (unsigned int i = 0; i < filename.length(); i++)
    {
        if (filename[i] == '.')
            period = i;
    }

    string extension;

    if (period != 0)
    {
        for (unsigned int i = period; i < filename.length(); i++)
            extension += filename[i];
    }
    else
        extension = "NO FILE";

    return extension;
}

This function will actually call the error checks created above and then parse through the file.

void parseFile(string fileName)
{
    if (fileExists(fileName) && verifyExtension(fileName))
    {
        ifstream fs;
        fs.open(fileName.c_str());
        string fileCommand;

        while (fs.good())
        {
            string temp;

            getline(fs, fileCommand, '\n');

            for (unsigned int i = 0; i < fileCommand.length(); i++)
            {
                if (fileCommand[i] != ',')
                    temp += fileCommand[i];
                else
                    temp += " ";
            }

            if (temp != "\0")
            {
                // Place your code here to run the file.
            }
        }
        fs.close();
    }
    else if (!fileExists(fileName))
    {
        cout << "Error: The provided file does not exist: " << fileName << endl;

        if (!verifyExtension(fileName))
        {
            if (getExtension(fileName) != "NO FILE")
                cout << "\tCheck the file extension." << endl;
            else
                cout << "\tThere is no file in the provided path." << endl;
        }
    }
    else if (!verifyExtension(fileName)) 
    {
        if (getExtension(fileName) != "NO FILE")
            cout << "Incorrect file extension provided: " << getExtension(fileName) << endl;
        else
            cout << "There is no file in the following path: " << fileName << endl;
    }
}
Putrescent answered 6/10, 2015 at 20:52 Comment(0)
T
2

Since I'm not used to Boost right now, I will suggest a simpler solution. Let's suppose that your .csv file has 100 lines with 10 numbers in each line, separated by ','. You could load this data in the form of an array with the following code:

#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    int A[100][10];
    ifstream ifs;
    ifs.open("name_of_file.csv");
    string s1;
    char c;
    for(int k=0; k<100; k++)
    {
        getline(ifs,s1);
        stringstream stream(s1);
        int j=0;
        while(1)
        {
            stream >>A[k][j];
            stream >> c;
            j++;
            if(!stream) {break;}
        }
    }


}
Tailstock answered 16/10, 2016 at 12:21 Comment(0)
P
2

You can use this library: https://github.com/vadamsky/csvworker

Example code:

#include <iostream>
#include "csvworker.h"

using namespace std;

int main()
{
    //
    CsvWorker csv;
    csv.loadFromFile("example.csv");
    cout << csv.getRowsNumber() << "  " << csv.getColumnsNumber() << endl;

    csv.getFieldRef(0, 2) = "0";
    csv.getFieldRef(1, 1) = "0";
    csv.getFieldRef(1, 3) = "0";
    csv.getFieldRef(2, 0) = "0";
    csv.getFieldRef(2, 4) = "0";
    csv.getFieldRef(3, 1) = "0";
    csv.getFieldRef(3, 3) = "0";
    csv.getFieldRef(4, 2) = "0";

    for(unsigned int i=0;i<csv.getRowsNumber();++i)
    {
        //cout << csv.getRow(i) << endl;
        for(unsigned int j=0;j<csv.getColumnsNumber();++j)
        {
            cout << csv.getField(i, j) << ".";
        }
        cout << endl;
    }

    csv.saveToFile("test.csv");

    //
    CsvWorker csv2(4,4);

    csv2.getFieldRef(0, 0) = "a";
    csv2.getFieldRef(0, 1) = "b";
    csv2.getFieldRef(0, 2) = "r";
    csv2.getFieldRef(0, 3) = "a";
    csv2.getFieldRef(1, 0) = "c";
    csv2.getFieldRef(1, 1) = "a";
    csv2.getFieldRef(1, 2) = "d";
    csv2.getFieldRef(2, 0) = "a";
    csv2.getFieldRef(2, 1) = "b";
    csv2.getFieldRef(2, 2) = "r";
    csv2.getFieldRef(2, 3) = "a";

    csv2.saveToFile("test2.csv");

    return 0;
}
Panorama answered 29/6, 2017 at 23:51 Comment(2)
Another interesting library is github.com/roman-kashitsyn/text-csvSaldana
I got an error showing: "binary '<<' : no operator found which takes a right-hand operand of type 'row' (or there is no acceptable conversion)" Is there any solution for it?Nuthouse
S
2

Parsing CSV file lines with Stream

I wrote a small example of parsing CSV file lines; it can be extended with for and while loops if desired:

#include <iostream>
#include <fstream>
#include <string>

using namespace std;

int main() {


ifstream fin("Infile.csv");
ofstream fout("OutFile.csv");
string strline, strremain, strCol1 , strout;

string delimeter =";";

int d1;

To continue until the end of the file:

while (!fin.eof()){ 

Get the next line from the input file:

    getline(fin,strline,'\n');      

Find the delimiter position in the line:

    d1 = strline.find(';');

and parse the first column:

    strCol1 = strline.substr(0,d1); // parse first Column
    d1++;
    strremain = strline.substr(d1); // remaining line

Create the output line in CSV format:

    strout.clear(); // reset the output line for this row
    strout.append(strCol1);
    strout.append(delimeter);

Write the line to the output file:

    fout << strout << endl; //out file line

} 

fin.close();
fout.close();

return(0);
}

This code compiles and runs. Good luck!

Salify answered 24/1, 2020 at 9:16 Comment(0)
C
1

Excuse me, but this all seems like a great deal of elaborate syntax to hide a few lines of code.

Why not this:

/**

  Read line from a CSV file

  @param[in] fp file pointer to open file
  @param[in] vls reference to vector of strings to hold next line

  */
void readCSV( FILE *fp, std::vector<std::string>& vls )
{
    vls.clear();
    if( ! fp )
        return;
    char buf[10000];
    if( ! fgets( buf, sizeof(buf), fp) )
        return;
    std::string s = buf;
    int p,q;
    q = -1;
    // loop over columns
    while( 1 ) {
        p = q;
        q = s.find_first_of(",\n",p+1);
        if( q == -1 ) 
            break;
        vls.push_back( s.substr(p+1,q-p-1) );
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    std::vector<std::string> vls;
    FILE * fp = fopen( argv[1], "r" );
    if( ! fp )
        return 1;
    readCSV( fp, vls );
    readCSV( fp, vls );
    readCSV( fp, vls );
    std::cout << "row 3, col 4 is " << vls[3].c_str() << "\n";

    return 0;
}
Crossing answered 14/7, 2009 at 14:39 Comment(2)
Erm, why would there be ",\n" in the string?Commination
@Commination look up the find_first_of method of the string class, and you'll see that it takes a set of characters; \n is the newline character, so it counts as a single character in this instance. It doesn't search for the entire ",\n" value as a whole. It searches for each individual character, namely comma or newline. find_first_of returns the position of the first such character it finds, and npos if it finds neither, which means it's finished reading the line. fp keeps track of the position in the file internally, so each call to readCSV moves it one row at a time.Auspex
P
1

You could also take a look at capabilities of Qt library.

It has regular expression support, and the QString class has nice methods, e.g. split(), which returns a QStringList, a list of strings obtained by splitting the original string with a provided delimiter. That should suffice for a CSV file.

To get a column with a given header name I use the following: c++ inheritance Qt problem qstring
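For illustration, here is a minimal sketch of the split() approach (the file name is made up, and a plain split(',') does not handle commas inside quoted fields):

#include <QFile>
#include <QStringList>
#include <QTextStream>
#include <QDebug>

int main()
{
    QFile file("data.csv");   // hypothetical input file
    if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
        return 1;

    QTextStream in(&file);
    while (!in.atEnd()) {
        QString line = in.readLine();            // one CSV row
        QStringList cells = line.split(',');     // naive split, no quote handling
        qDebug() << cells;
    }
    return 0;
}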

Phiona answered 18/9, 2009 at 10:28 Comment(1)
this won't handle commas in quotesDiversified
M
1

If you don't want to deal with including boost in your project (it is considerably large if all you are going to use it for is CSV parsing...)

I have had luck with the CSV parsing here:

http://www.zedwood.com/article/112/cpp-csv-parser

It handles quoted fields - but does not handle inline \n characters (which is probably fine for most uses).

Masteratarms answered 29/4, 2011 at 19:12 Comment(1)
Shouldn't the compiler strip out everything that is non-essential?Slicer
C
1

For what it is worth, here is my implementation. It deals with wstring input, but could be adjusted to string easily. It does not handle newlines in fields (my application does not either, but adding support for them isn't too difficult), and it does not comply with the "\r\n" end of line per the RFC (assuming you use std::getline), but it does handle whitespace trimming and double quotes correctly (hopefully).

using namespace std;

// trim whitespaces around field or double-quotes, remove double-quotes and replace escaped double-quotes (double double-quotes)
wstring trimquote(const wstring& str, const wstring& whitespace, const wchar_t quotChar)
{
    wstring ws;
    wstring::size_type strBegin = str.find_first_not_of(whitespace);
    if (strBegin == wstring::npos)
        return L"";

    wstring::size_type strEnd = str.find_last_not_of(whitespace);
    wstring::size_type strRange = strEnd - strBegin + 1;

    if((str[strBegin] == quotChar) && (str[strEnd] == quotChar))
    {
        ws = str.substr(strBegin+1, strRange-2);
        strBegin = 0;
        while((strEnd = ws.find(quotChar, strBegin)) != wstring::npos)
        {
            ws.erase(strEnd, 1);
            strBegin = strEnd+1;
        }

    }
    else
        ws = str.substr(strBegin, strRange);
    return ws;
}

pair<unsigned, unsigned> nextCSVQuotePair(const wstring& line, const wchar_t quotChar, unsigned ofs = 0)
{
    pair<unsigned, unsigned> r;
    r.first = line.find(quotChar, ofs);
    r.second = wstring::npos;
    if(r.first != wstring::npos)
    {
        r.second = r.first;
        while(((r.second = line.find(quotChar, r.second+1)) != wstring::npos)
            && (line[r.second+1] == quotChar)) // WARNING: assumes null-terminated string such that line[r.second+1] always exist
            r.second++;

    }
    return r;
}

unsigned parseLine(vector<wstring>& fields, const wstring& line)
{
    unsigned ofs, ofs0, np;
    const wchar_t delim = L',';
    const wstring whitespace = L" \t\xa0\x3000\x2000\x2001\x2002\x2003\x2004\x2005\x2006\x2007\x2008\x2009\x200a\x202f\x205f";
    const wchar_t quotChar = L'\"';
    pair<unsigned, unsigned> quot;

    fields.clear();

    ofs = ofs0 = 0;
    quot = nextCSVQuotePair(line, quotChar);
    while((np = line.find(delim, ofs)) != wstring::npos)
    {
        if((np > quot.first) && (np < quot.second))
        { // skip delimiter inside quoted field
            ofs = quot.second+1;
            quot = nextCSVQuotePair(line, quotChar, ofs);
            continue;
        }
        fields.push_back( trimquote(line.substr(ofs0, np-ofs0), whitespace, quotChar) );
        ofs = ofs0 = np+1;
    }
    fields.push_back( trimquote(line.substr(ofs0), whitespace, quotChar) );

    return fields.size();
}
Catechetical answered 19/7, 2013 at 3:45 Comment(0)
P
1

Here is a ready-to-use function if all you need is to load a data file of doubles (no integers, no text).

#include <sstream>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include <algorithm>

using namespace std;

/**
 * Parse a CSV data file and fill the 2d STL vector "data".
 * Limits: only "pure" double data, not encapsulated by " and without \n inside.
 * Furthermore, no formatting in the data (e.g. scientific notation).
 * It does however handle both dots and commas as decimal separators and removes thousand separators.
 * 
 * returnCodes[0]: file access 0-> ok 1-> not able to read; 2-> decimal separator equal to comma separator
 * returnCodes[1]: number of records
 * returnCodes[2]: number of fields. -1 If rows have different field size
 * 
 */
vector<int>
readCsvData (vector <vector <double>>& data, const string& filename, const string& delimiter, const string& decseparator){

 int vv[3] = { 0,0,0 };
 vector<int> returnCodes(&vv[0], &vv[0]+3);

 string rowstring, stringtoken;
 double doubletoken;
 int rowcount=0;
 int fieldcount=0;
 data.clear();

 ifstream iFile(filename, ios_base::in);
 if (!iFile.is_open()){
   returnCodes[0] = 1;
   return returnCodes;
 }
 while (getline(iFile, rowstring)) {
    if (rowstring=="") continue; // empty line
    rowcount ++; //let's start with 1
    if(delimiter == decseparator){
      returnCodes[0] = 2;
      return returnCodes;
    }
    if(decseparator != "."){
     // remove dots (used as thousand separators)
     string::iterator end_pos = remove(rowstring.begin(), rowstring.end(), '.');
     rowstring.erase(end_pos, rowstring.end());
     // replace decimal separator with dots.
     replace(rowstring.begin(), rowstring.end(),decseparator.c_str()[0], '.'); 
    } else {
     // remove commas (used as thousand separators)
     string::iterator end_pos = remove(rowstring.begin(), rowstring.end(), ',');
     rowstring.erase(end_pos, rowstring.end());
    }
    // tokenize..
    vector<double> tokens;
    // Skip delimiters at beginning.
    string::size_type lastPos = rowstring.find_first_not_of(delimiter, 0);
    // Find first "non-delimiter".
    string::size_type pos     = rowstring.find_first_of(delimiter, lastPos);
    while (string::npos != pos || string::npos != lastPos){
        // Found a token, convert it to double and add it to the vector.
        stringtoken = rowstring.substr(lastPos, pos - lastPos);
        if (stringtoken == "") {
            tokens.push_back(0.0);
        } else {
            istringstream totalSString(stringtoken);
            totalSString >> doubletoken;
            tokens.push_back(doubletoken);
        }
        // Skip delimiters.  Note the "not_of"
        lastPos = rowstring.find_first_not_of(delimiter, pos);
        // Find next "non-delimiter"
        pos = rowstring.find_first_of(delimiter, lastPos);
    }
    if(rowcount == 1){
      fieldcount = tokens.size();
      returnCodes[2] = tokens.size();
    } else {
      if (tokens.size() != fieldcount){
        returnCodes[2] = -1;
      }
    }
    data.push_back(tokens);
 }
 iFile.close();
 returnCodes[1] = rowcount;
 return returnCodes;
}
Paint answered 24/1, 2014 at 13:2 Comment(0)
P
1

You can open and read a .csv file using the fopen and fscanf functions, but the important thing is to parse the data. The simplest way to parse the data is by using a delimiter; in the case of .csv, the delimiter is ','.

Suppose your data1.csv file is as follows:

A,45,76,01
B,77,67,02
C,63,76,03
D,65,44,04

You can tokenize the data and store it in char arrays, and later use atoi() etc. for the appropriate conversions.

FILE *fp;
char str1[10], str2[10], str3[10], str4[10];

fp = fopen("G:\\data1.csv", "r");
if(NULL == fp)
{
    printf("\nError in opening file.");
    return 0;
}
while(4 == fscanf(fp, " %[^,], %[^,], %[^,], %s", str1, str2, str3, str4))
{
    printf("\n%s %s %s %s", str1, str2, str3, str4);
}
fclose(fp);

In %[^,], the ^ inverts the logic: it matches any string that does not contain a comma. The , that follows in the format string then matches the comma that terminated the previous field.
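As a small illustration of the conversion step mentioned above (a sketch reusing the str2 buffer from the snippet, which holds the second column as text; atoi() needs <stdlib.h>):

int second_column = atoi(str2);              /* e.g. "45" -> 45 */
printf("\nsecond column as int: %d", second_column);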

Pneumatophore answered 10/11, 2014 at 11:38 Comment(0)
E
1

I wrote a nice way of parsing CSV files and I thought I should add it as an answer:

#include <algorithm>
#include <fstream>
#include <iostream>
#include <stdlib.h>
#include <stdio.h>
#include <string>
#include <vector>

struct CSVDict
{
  std::vector< std::string > inputImages;
  std::vector< double > inputLabels;
};

/**
\brief Splits the string

\param str String to split
\param delim Delimiter on the basis of which splitting is to be done
\return results Output in the form of vector of strings
*/
std::vector<std::string> stringSplit( const std::string &str, const std::string &delim )
{
  std::vector<std::string> results;

  for (size_t i = 0; i < str.length(); i++)
  {
    std::string tempString = "";
    while ((i < str.length()) && (str[i] != delim[0]))
    {
      tempString += str[i];
      i++;
    }
    results.push_back(tempString);
  }

  return results;
}

/**
\brief Parse the supplied CSV File and obtain Row and Column information. 

Assumptions:
1. Header information is in first row
2. Delimiters are only used to differentiate cell members

\param csvFileName The full path of the file to parse
\param inputColumns The string of input columns which contain the data to be used for further processing
\param inputLabels The string of input labels based on which further processing is to be done
\param delim The delimiters used in inputColumns and inputLabels
\return Vector of Vector of strings: Collection of rows and columns
*/
std::vector< CSVDict > parseCSVFile( const std::string &csvFileName, const std::string &inputColumns, const std::string &inputLabels, const std::string &delim )
{
  std::vector< CSVDict > return_CSVDict;
  std::vector< std::string > inputColumnsVec = stringSplit(inputColumns, delim), inputLabelsVec = stringSplit(inputLabels, delim);
  std::vector< std::vector< std::string > > returnVector;
  std::ifstream inFile(csvFileName.c_str());
  int row = 0;
  std::vector< size_t > inputColumnIndeces, inputLabelIndeces;
  for (std::string line; std::getline(inFile, line, '\n');)
  {
    CSVDict tempDict;
    std::vector< std::string > rowVec;
    line.erase(std::remove(line.begin(), line.end(), '"'), line.end());
    rowVec = stringSplit(line, delim);

    // for the first row, record the indeces of the inputColumns and inputLabels
    if (row == 0)
    {
      for (size_t i = 0; i < rowVec.size(); i++)
      {
        for (size_t j = 0; j < inputColumnsVec.size(); j++)
        {
          if (rowVec[i] == inputColumnsVec[j])
          {
            inputColumnIndeces.push_back(i);
          }
        }
        for (size_t j = 0; j < inputLabelsVec.size(); j++)
        {
          if (rowVec[i] == inputLabelsVec[j])
          {
            inputLabelIndeces.push_back(i);
          }
        }
      }
    }
    else
    {
      for (size_t i = 0; i < inputColumnIndeces.size(); i++)
      {
        tempDict.inputImages.push_back(rowVec[inputColumnIndeces[i]]);
      }
      for (size_t i = 0; i < inputLabelIndeces.size(); i++)
      {
        double test = std::atof(rowVec[inputLabelIndeces[i]].c_str());
        tempDict.inputLabels.push_back(std::atof(rowVec[inputLabelIndeces[i]].c_str()));
      }
      return_CSVDict.push_back(tempDict);
    }
    row++;
  }

  return return_CSVDict;
}
Energetics answered 18/11, 2015 at 13:36 Comment(0)
R
1

It is possible to use std::regex.

Depending on the size of your file and the memory available to you, it is possible to read it either line by line or entirely into a std::string.

To read the file one can use:

std::ifstream t("file.txt");
std::string sin((std::istreambuf_iterator<char>(t)),
                 std::istreambuf_iterator<char>());

Then you can match with a regex like this, which is customizable to your needs:

std::regex word_regex("[^,\\s]+");
auto what = 
    std::sregex_iterator(sin.begin(), sin.end(), word_regex);
auto wend = std::sregex_iterator();

std::vector<std::string> v;
for (; what != wend; ++what) {
    std::smatch match = *what;
    v.push_back(match.str());
}
Rothwell answered 2/12, 2015 at 10:57 Comment(0)
D
1

My simple and fast contribution, if I may. No Boost.

Accepts delimited (quoted) fields, and delimiter characters within a delimited field, as long as they appear in pairs or away from a separator.

#include <iostream>
#include <vector>
#include <fstream>

std::vector<std::string> SplitCSV(const std::string &data, char separator, char delimiter)
{
  std::vector<std::string> Values;
  std::string Val = "";
  bool VDel = false; // Is within delimiter?
  size_t CDel = 0; // Delimiters counter within delimiters.
  const char *C = data.c_str();
  size_t P = 0;
  do
  {
    if ((Val.length() == 0) && (C[P] == delimiter))
    {
      VDel = !VDel;
      CDel = 0;
      P++;
      continue;
    }
    if (VDel)
    {
      if (C[P] == delimiter)
      {
        if (((CDel % 2) == 0) && ( (C[P+1] == separator) || (C[P+1] == 0) || (C[P+1] == '\n') || (C[P+1] == '\r') ))
        {
          VDel = false;
          CDel = 0;
          P++;
          continue;
        }
        else
          CDel++;
      }
    }
    else
    {
      if (C[P] == separator)
      {
        Values.push_back(Val);
        Val = "";
        P++;
        continue;
      }
      if ((C[P] == 0) || (C[P] == '\n') || (C[P] == '\r'))
        break;
    }
    Val += C[P];
    P++;
  } while(P < data.length());
  Values.push_back(Val);
  return Values;
}

bool ReadCsv(const std::string &fname, std::vector<std::vector<std::string>> &data,
  char separator = ',', char delimiter = '\"')
{
  bool Ret = false;
  std::ifstream FCsv(fname);
  if (FCsv)
  {
    FCsv.seekg(0, FCsv.end);
    size_t Siz = FCsv.tellg();
    if (Siz > 0)
    {
      FCsv.seekg(0);
      data.clear();
      std::string Line;
      while (getline(FCsv, Line, '\n'))
        data.push_back(SplitCSV(Line, separator, delimiter));
      Ret = true;
    }
    FCsv.close();
  }
  return Ret;
}

int main(int argc, char *argv[])
{
  std::vector<std::vector<std::string>> Data;
  ReadCsv("fsample.csv", Data);
  return 0;
}
Doom answered 23/5, 2022 at 16:54 Comment(0)
S
0

If you're using Visual Studio / MFC, the following solution may make your life easier. It supports both Unicode and MBCS, has comments, doesn't have dependencies other than CString, and works well enough for me. It doesn't support line breaks embedded within a quoted string, but I don't care so long as it doesn't crash in that case, which it doesn't.

The overall strategy is, handle quoted and empty strings as special cases, and use Tokenize for the rest. For quoted strings, the strategy is, find the real closing quote, keeping track of whether pairs of consecutive quotes were encountered. If they were, use Replace to convert the pairs to singles. No doubt there are more efficient methods but performance wasn't sufficiently critical in my case to justify further optimization.

class CParseCSV {
public:
// Construction
    CParseCSV(const CString& sLine);

// Attributes
    bool    GetString(CString& sDest);

protected:
    CString m_sLine;    // line to extract tokens from
    int     m_nLen;     // line length in characters
    int     m_iPos;     // index of current position
};

CParseCSV::CParseCSV(const CString& sLine) : m_sLine(sLine)
{
    m_nLen = m_sLine.GetLength();
    m_iPos = 0;
}

bool CParseCSV::GetString(CString& sDest)
{
    if (m_iPos < 0 || m_iPos > m_nLen)  // if position out of range
        return false;
    if (m_iPos == m_nLen) { // if at end of string
        sDest.Empty();  // return empty token
        m_iPos = -1;    // really done now
        return true;
    }
    if (m_sLine[m_iPos] == '\"') {  // if current char is double quote
        m_iPos++;   // advance to next char
        int iTokenStart = m_iPos;
        bool    bHasEmbeddedQuotes = false;
        while (m_iPos < m_nLen) {   // while more chars to parse
            if (m_sLine[m_iPos] == '\"') {  // if current char is double quote
                // if next char exists and is also double quote
                if (m_iPos < m_nLen - 1 && m_sLine[m_iPos + 1] == '\"') {
                    // found pair of consecutive double quotes
                    bHasEmbeddedQuotes = true;  // request conversion
                    m_iPos++;   // skip first quote in pair
                } else  // next char doesn't exist or is normal
                    break;  // found closing quote; exit loop
            }
            m_iPos++;   // advance to next char
        }
        sDest = m_sLine.Mid(iTokenStart, m_iPos - iTokenStart);
        if (bHasEmbeddedQuotes) // if string contains embedded quote pairs
            sDest.Replace(_T("\"\""), _T("\""));    // convert pairs to singles
        m_iPos += 2;    // skip closing quote and trailing delimiter if any
    } else if (m_sLine[m_iPos] == ',') {    // else if char is comma
        sDest.Empty();  // return empty token
        m_iPos++;   // advance to next char
    } else {    // else get next comma-delimited token
        sDest = m_sLine.Tokenize(_T(","), m_iPos);
    }
    return true;
}

// calling code should look something like this:

    CStdioFile  fIn(pszPath, CFile::modeRead);
    CString sLine, sToken;
    while (fIn.ReadString(sLine)) { // for each line of input file
        if (!sLine.IsEmpty()) { // ignore blank lines
            CParseCSV   csv(sLine);
            while (csv.GetString(sToken)) {
                // do something with sToken here
            }
        }
    }
Sprat answered 19/12, 2018 at 6:49 Comment(0)
C
0

I've got a much quicker solution; it was originally intended for this question:

How to pull specific part of different strings?

But it was closed, obviously. I'm not about to throw this away, though:

#include <iostream>
#include <string>
#include <regex>

std::string text = "\"4,\"\"3\"\",\"\"Mon May 11 03:17:40 UTC 2009\"\",\"\"kindle2\"\",\"\"tpryan\"\",\"\"TEXT HERE\"\"\";;;;";

int main()
{
    std::regex r("(\".*\")(\".*\")(\".*\")(\".*\")(\".*\")(\".*\")(\".*\")(\".*\")(\".*\")(\".*\")");
    std::smatch m;
    std::regex_search(text, m, r);
    std::cout<<"FOUND: "<<m[9]<<std::endl;

    return 0;
}

Just pick out whichever match you want from the smatch collection by index. Regex is bliss.

Cappadocia answered 19/12, 2018 at 21:36 Comment(1)
A haiku that springs to mind: You have a problem. You solve it with a regex. Now there's two problems. :-DRelly
S
0

Since everyone is posting their solution, here is mine, using templates, lambdas and tuples.

It can convert any CSV with the wanted columns into a C++ vector of tuples.

It works by defining the type of each CSV line element in a tuple.

You also need to define a std::string-to-type conversion Formatter lambda for each element (using std::strtod, for example).

Then you get a vector of these tuples corresponding to your CSV data.

You can reuse this easily to match any CSV structure.

StringHelpers.hpp

#include <string>
#include <fstream>
#include <vector>
#include <functional>

namespace StringHelpers
{
    template<typename Tuple>
    using Formatter = std::function<Tuple(const std::vector<std::string> &)>;

    std::vector<std::string> split(const std::string &string, const std::string &delimiter);

    template<typename Tuple>
    std::vector<Tuple> readCsv(const std::string &path, const std::string &delimiter, Formatter<Tuple> formatter);
};

StringHelpers.cpp

#include "StringHelpers.hpp"

namespace StringHelpers
{
    /**
     * Split a string with the given delimiter into several strings
     *
     * @param string - The string to extract the substrings from
     * @param delimiter - The substrings delimiter
     *
     * @return The substrings
     */
    std::vector<std::string> split(const std::string &string, const std::string &delimiter)
    {
        std::vector<std::string> result;
        size_t                   last = 0,
                                 next = 0;

        while ((next = string.find(delimiter, last)) != std::string::npos) {
            result.emplace_back(string.substr(last, next - last));
            last = next + delimiter.length();
        }

        result.emplace_back(string.substr(last));

        return result;
    }

    /**
     * Read a CSV file and store its values into the given structure (Tuple with Formatter constructor)
     *
     * @tparam Tuple - The CSV line structure format
     *
     * @param path - The CSV file path
     * @param delimiter - The CSV values delimiter
     * @param formatter - The CSV values formatter that take a vector of strings in input and return a Tuple
     *
     * @return The CSV as vector of Tuple
     */
    template<typename Tuple>
    std::vector<Tuple> readCsv(const std::string &path, const std::string &delimiter, Formatter<Tuple> formatter)
    {
        std::ifstream      file(path, std::ifstream::in);
        std::string        line;
        std::vector<Tuple> result;

        if (file.fail()) {
            throw std::runtime_error("The file " + path + " could not be opened");
        }

        while (std::getline(file, line)) {
            result.emplace_back(formatter(split(line, delimiter)));
        }

        file.close();

        return result;
    }

    // Forward template declarations

    template std::vector<std::tuple<double, double, double>> readCsv<std::tuple<double, double, double>>(const std::string &, const std::string &, Formatter<std::tuple<double, double, double>>);
} // End of StringHelpers namespace

main.cpp (some usage)

#include "StringHelpers.hpp"

/**
 * Example of use with a CSV file which have (number,Red,Green,Blue) as line values. We do not want to use the 1st value
 * of the line.
 */
int main(int argc, char **argv)
{
    // Declare CSV line type, formatter and template type
    typedef std::tuple<double, double, double>                          CSV_format;
    typedef std::function<CSV_format(const std::vector<std::string> &)> formatterT;

    enum RGB { Red = 1, Green, Blue };

    const std::string COLOR_MAP_PATH = "/some/absolute/path";

    // Load the color map
    auto colorMap = StringHelpers::readCsv<CSV_format>(COLOR_MAP_PATH, ",", [](const std::vector<std::string> &values) {
        return CSV_format {
                // Here is the formatter lambda that convert each value from string to what you want
                std::strtod(values[Red].c_str(), nullptr),
                std::strtod(values[Green].c_str(), nullptr),
                std::strtod(values[Blue].c_str(), nullptr)
        };
    });

    // Use your colorMap as you  wish...
}
Sill answered 6/2, 2020 at 10:35 Comment(0)
S
0

A minor edit to @sastanin's solution, so that it can deal with newlines within quoted fields.

#include <istream>
#include <string>
#include <vector>

// The CSVState enum comes from @sastanin's original answer.
enum class CSVState { UnquotedField, QuotedField, QuotedQuote };

std::vector<std::vector<std::string>> readCSV(std::istream &in) {
    std::vector<std::vector<std::string>> table;

    CSVState state = CSVState::UnquotedField;
    std::vector<std::string> fields {""};
    size_t i = 0; // index of the current field

    std::string row;
    while (std::getline(in, row)) {
        row.push_back('\n'); // keep the newline so the state machine can detect row ends
        for (char c : row) {
            switch (state) {
                case CSVState::UnquotedField:
                    switch (c) {
                        case ',': // end of field
                                  fields.push_back(""); i++;
                                  break;
                        case '"': state = CSVState::QuotedField;
                                  break;
                        case '\n': // end of row
                                  table.push_back(fields);
                                  fields = std::vector<std::string>{""};
                                  i = 0;
                                  break;
                        default:  fields[i].push_back(c);
                                  break; }
                    break;
                case CSVState::QuotedField:
                    switch (c) {
                        case '"': state = CSVState::QuotedQuote;
                                  break;
                        default:  fields[i].push_back(c); // newlines inside quotes are kept
                                  break; }
                    break;
                case CSVState::QuotedQuote:
                    switch (c) {
                        case ',': // , after closing quote
                                  fields.push_back(""); i++;
                                  state = CSVState::UnquotedField;
                                  break;
                        case '"': // "" -> "
                                  fields[i].push_back('"');
                                  state = CSVState::QuotedField;
                                  break;
                        case '\n': // end of row after closing quote
                                  table.push_back(fields);
                                  state = CSVState::UnquotedField;
                                  fields = std::vector<std::string>{""};
                                  i = 0;
                                  break;
                        default:  // stray character after closing quote: end the quote
                                  state = CSVState::UnquotedField;
                                  break; }
                    break;
            }
        }
    }
    return table;
}
Sweatt answered 7/2, 2020 at 14:30 Comment(0)
F
-1

CSV files are text files consisting of lines, and each line consists of tokens separated by commas. There are a few things you should know when parsing:

(0) The file is encoded with the "CP_ACP" code page; you should use the same code page to decode the file contents.

(1) The CSV loses "composite cell" information (such as rowspan > 1), so when it is read back into Excel, the merged-cell information is lost.

(2) Cell text can be quoted with '"' at the head and tail, and a literal quote character becomes two quote characters. So the closing quote is a quote character that is not followed by another quote character. For example, if a cell contains a comma, it must be quoted in CSV, because the comma is meaningful in CSV.

(3) When the cell content has multiple lines, it will be quoted in CSV. In this case your parser must keep reading the next several physical lines of the CSV file until it finds a closing quote character matching the opening one; make sure the current logical line is read completely before parsing the line's tokens. A sketch of this is given after the example below.

For example, in a CSV file the following 3 physical lines form one logical line consisting of 3 tokens:

    --+----------
    1 |a,"b-first part
    2 |b-second part
    3 |b-third part",c
    --+----------
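Here is a minimal sketch (not part of the original answer) of one way to assemble such a logical line: keep appending physical lines until the quote characters in the buffer are balanced, which works because literal quotes are always written as pairs:

#include <algorithm>
#include <istream>
#include <string>

// Read one logical CSV line, joining physical lines while a quoted cell is still open.
bool getLogicalLine(std::istream& in, std::string& logical)
{
    logical.clear();
    std::string physical;
    while (std::getline(in, physical)) {
        if (!logical.empty())
            logical += '\n';   // restore the newline that belongs inside the quoted cell
        logical += physical;
        // An even number of '"' characters means every opening quote has been closed,
        // so the logical line is complete and can be split into tokens.
        if (std::count(logical.begin(), logical.end(), '"') % 2 == 0)
            return true;
    }
    return !logical.empty();
}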
Florey answered 29/3, 2021 at 7:10 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.