# Pastebin O59xlWaO

```
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 8 in stage 230.0 failed 4 times, most recent failure: Lost task 8.3 in stage 230.0 (TID 51145, 10.0.0.176, executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 217, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 59, in read_command
    command = serializer._read_with_length(file)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 170, in _read_with_length
    return self.loads(obj)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 559, in loads
    return pickle.loads(obj, encoding=encoding)
  File "./listenbrainz_spark.zip/listenbrainz_spark/train_models.py", line 11, in <module>
    from pyspark.mllib.recommendation import ALS, Rating
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 28, in <module>
    import numpy
ImportError: No module named 'numpy'
```
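The `ImportError` in this traceback is raised inside a Spark executor, not the driver: `pyspark.mllib` imports `numpy` when the pickled task is deserialized on the worker, so the executor's Python interpreter is missing the package even if the driver's has it. As a sketch of how to confirm which interpreter is affected, the hypothetical helper below (the name `module_available` is not from the log) can be run with each node's Python:

```python
import importlib.util


def module_available(name):
    """Return True if `name` can be imported by this interpreter."""
    return importlib.util.find_spec(name) is not None


# `sys` ships with every Python, so it is always importable;
# a made-up name is not. On an affected executor node,
# module_available("numpy") would return False.
print(module_available("sys"))                             # → True
print(module_available("no_such_module_example_xyz"))      # → False
```

If the check fails on the workers, the usual remedies are installing `numpy` into every worker node's Python environment, or pointing driver and executors at the same interpreter (for example via the `PYSPARK_PYTHON` environment variable or the `spark.pyspark.python` configuration property).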