If you're trying to automate this, I'd suggest the following.
In your cluster's bootstrap script, copy the jar from S3 into a readable location, along these lines:
#!/bin/bash
aws s3 cp s3://path_to_your_file.jar /home/hadoop/
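Note that the bootstrap script itself has to live in S3 and be registered as a bootstrap action when you create the cluster. A minimal sketch of a slightly hardened version (the filename copy-jar.sh is my choice, and set -euo pipefail is my addition so a failed copy aborts the bootstrap; the S3 path is the same placeholder as above):

```shell
# Write the bootstrap script locally, then upload it to S3 and point a
# bootstrap action at it in the EMR console (or via the CLI).
cat > copy-jar.sh <<'EOF'
#!/bin/bash
# Abort the bootstrap if the copy fails, so the cluster doesn't come up
# without the jar in place.
set -euo pipefail
aws s3 cp s3://path_to_your_file.jar /home/hadoop/
EOF

# Quick sanity check that the script parses before uploading it.
bash -n copy-jar.sh && echo "syntax OK"
```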
Then, in your cluster's software settings (in the EMR console, during cluster creation), set the classpath properties:
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraClassPath": "/home/hadoop/path_to_your_file.jar",
      "spark.jars": "/home/hadoop/path_to_your_file.jar"
    }
  }
]
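If you'd rather launch from the command line than the console, the same classification can be saved to a file and passed to aws emr create-cluster with --configurations file://spark-config.json. A sketch (the filename spark-config.json is my choice; the jar paths are the same placeholders as above):

```shell
# Write the spark-defaults classification to a file so it can be reused
# across cluster launches.
cat > spark-config.json <<'EOF'
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraClassPath": "/home/hadoop/path_to_your_file.jar",
      "spark.jars": "/home/hadoop/path_to_your_file.jar"
    }
  }
]
EOF

# Validate the JSON before handing it to the CLI.
python3 -m json.tool spark-config.json > /dev/null && echo "valid JSON"
```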
(You can add further properties here, such as spark.executor.extraClassPath or spark.driver.userClassPathFirst.)
Then launch your cluster, and the jar's classes should be available through imports.
I had to log into the primary node and run spark-shell to find out where the classes lived (by typing import com. and pressing Tab to autocomplete; there's probably an easier way to do this). After that I was able to import and use the class in Zeppelin/Jupyter.