Spark iterate HDFS directory

You can use org.apache.hadoop.fs.FileSystem. Specifically, FileSystem.listFiles(path, true) lists all files under a path; the second argument makes the listing recursive.

And from Spark, reusing the SparkContext's Hadoop configuration:

FileSystem.get(sc.hadoopConfiguration).listFiles(..., true)

Edit

It's worth noting that good practice is to get the FileSystem associated with the Path's scheme, rather than the default file system, since the path may live on a different file system (e.g. s3a:// vs. hdfs://):

path.getFileSystem(sc.hadoopConfiguration).listFiles(path, true)
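Putting the pieces together, a minimal sketch might look like the following. The directory path is a hypothetical placeholder, and note that listFiles returns a Hadoop RemoteIterator, which is not a Scala Iterator and must be drained with an explicit loop:

```scala
import org.apache.hadoop.fs.{FileSystem, LocatedFileStatus, Path, RemoteIterator}

// Hypothetical HDFS directory; replace with your own path.
val path = new Path("hdfs:///user/someuser/data")

// Resolve the FileSystem from the path's scheme, not the default FS.
val fs: FileSystem = path.getFileSystem(sc.hadoopConfiguration)

// Recursive listing: RemoteIterator must be consumed with hasNext/next.
val files: RemoteIterator[LocatedFileStatus] = fs.listFiles(path, true)
while (files.hasNext) {
  val status = files.next()
  println(status.getPath)
}
```

If you want idiomatic Scala collection methods, you can wrap the RemoteIterator in a small adapter that implements Iterator[LocatedFileStatus] by delegating hasNext and next.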

