Someone asked how to do this based on a schema. Building on the answers above, here is a simple example:
x = '''    1 123121234 joe
    2 234234234jill
    3 345345345jane
    4abcde12345jack'''

# Each tuple is (column name, start position, length); positions are 1-based, as substr() expects.
schema = [
    ("id", 1, 5),
    ("ssn", 6, 10),
    ("name", 16, 4)
]

# Write the sample fixed-width data to a file.
with open("personfixed.csv", "w") as f:
    f.write(x)

# Read each line as a single string column named "value".
df = spark.read.text("personfixed.csv")
df.show()

# Carve each field out of the "value" column with substr(startPos, length).
df2 = df
for colinfo in schema:
    df2 = df2.withColumn(colinfo[0], df2.value.substr(colinfo[1], colinfo[2]))

df2.show()
Here is the output:
+-------------------+
|              value|
+-------------------+
|    1 123121234 joe|
|    2 234234234jill|
|    3 345345345jane|
|    4abcde12345jack|
+-------------------+
+-------------------+-----+----------+----+
|              value|   id|       ssn|name|
+-------------------+-----+----------+----+
|    1 123121234 joe|    1| 123121234| joe|
|    2 234234234jill|    2| 234234234|jill|
|    3 345345345jane|    3| 345345345|jane|
|    4abcde12345jack|    4|abcde12345|jack|
+-------------------+-----+----------+----+
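Not part of the original example, but a possible refinement: the same schema loop can use pyspark.sql.functions.substring together with trim, plus a cast and a drop of the raw value column, so you end up with clean, typed columns. This is just a sketch reusing the df and schema defined above:

from pyspark.sql import functions as F

# Same (name, start, length) schema as above; substring() is also 1-based.
df3 = df
for name, start, length in schema:
    df3 = df3.withColumn(name, F.trim(F.substring(F.col("value"), start, length)))

# Optionally cast id to an integer and drop the raw line once the fields are extracted.
df3 = df3.withColumn("id", F.col("id").cast("int")).drop("value")
df3.show()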