I used Python and d6tstack like @citynorman, but because I had 24 million lines across 200 CSV files, that approach was killing my development database server.
This approach gives you a lot of control and performance in a few lines of code. It pulled the 24 million rows into a MySQL table with five indexed columns, data clean-ups included, in around two minutes. The CSV import tool in MySQL Workbench was taking days to do the same thing.
Here's what I got working:
import pandas as pd
import glob
import d6tstack.combine_csv as d6tc
import pymysql  # the DBAPI driver named in the connection string; d6tstack also supports other MySQL connectors
from sqlalchemy import create_engine
engine = create_engine("mysql+pymysql://usr:pass@host:3306/db")
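# Optional tweak (not part of my original setup): if your CSVs contain
# non-ASCII text, appending ?charset=utf8mb4 to the URL tells pymysql
# which encoding to use, e.g.:
# engine = create_engine("mysql+pymysql://usr:pass@host:3306/db?charset=utf8mb4")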
# For testing, pull in just one or two CSV files first, then run against all of them
# My data used a semicolon (;) separator, so change sep to match your files if needed
df = d6tc.CombinerCSV(glob.glob('C:/Users/user/Downloads/csvfiles/*.csv'), sep=';').to_pandas()
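# A quick-test sketch (assumption: slicing the glob list is enough to limit the load):
# df = d6tc.CombinerCSV(glob.glob('C:/Users/user/Downloads/csvfiles/*.csv')[:2], sep=';').to_pandas()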
# Drop the filepath and filename columns that d6tstack adds to the combined frame
df.drop(columns=["filepath", "filename"], inplace=True)
# I created indexes on my database table during testing, so this line
# makes sure no null values from the CSVs land in an indexed column
df = df[df['country'].notna()]
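# If you index several columns, dropna can cover them in one call; 'country'
# is mine, 'region' is just a hypothetical second indexed column:
# df = df.dropna(subset=['country', 'region'])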
# chunksize throttles the inserts so they don't overwhelm any buffers
# NEVER use if_exists='replace' unless you want the table dropped and
# recreated, wiping 100% of its existing rows
df.to_sql(name='table', con=engine, if_exists='append', index=False, chunksize=200)
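To sanity-check the load afterwards, a quick row count against the table works (a minimal sketch continuing the script above; 'table' matches the name passed to to_sql, and the backticks are needed because table is a reserved word in MySQL):

print(pd.read_sql("SELECT COUNT(*) FROM `table`", engine))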