%pyspark in Zeppelin: No module named pyspark error
Gregory Van Seghbroeck
gregory.vanseghbroeck at intec.ugent.be
Tue Jul 12 08:31:37 UTC 2016
Hi Kevin,
Thanks for the response! I really like the juju and Canonical community.
I can tell you the juju version: it is 1.25.3.
The status output will be a problem, since I have already removed most of the services. That said, I don’t think we are using the bigtop spark charms yet, so this might be the problem. Here is a list of the services I deployed before:
- cs:trusty/apache-hadoop-namenode-2
- cs:trusty/apache-hadoop-resourcemanager-3
- cs:trusty/apache-hadoop-slave-2
- cs:trusty/apache-hadoop-plugin-14
- cs:trusty/apache-spark-9
- cs:trusty/apache-zeppelin-7
The reason we don’t use the bigtop charms yet is that we see problems with the hostnames on the containers. Some of the relations use hostnames, but these cannot be resolved, so I have to add the IP-to-hostname mappings manually to the /etc/hosts file.
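In case it is useful to anyone hitting the same resolution problem, the manual workaround can be sketched roughly like this. The IP and hostname are placeholders, not values from my environment, and on a real unit the target file would be /etc/hosts:

```python
def ensure_host_entry(hosts_path, ip, hostname):
    """Append 'ip hostname' to hosts_path unless hostname is already mapped.

    Skips comment lines and is idempotent, so re-running it after a
    redeploy does not duplicate entries.
    """
    with open(hosts_path, "r+") as f:
        mapped = any(
            hostname in line.split()[1:]
            for line in f
            if line.strip() and not line.lstrip().startswith("#")
        )
        if not mapped:
            f.write(f"{ip} {hostname}\n")


# Placeholder values; substitute the real container IP and hostname.
# ensure_host_entry("/etc/hosts", "10.0.3.17", "juju-machine-2-lxc-0")
```

On the real units this needs to run as root, of course, since /etc/hosts is not world-writable.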
The image I pasted in, showing our environment, was a screenshot of the Zeppelin environment. These parameters looked okay from what I could find online.
Kind Regards,
Gregory
From: Kevin Monroe [mailto:kevin.monroe at canonical.com]
Sent: Monday, July 11, 2016 7:20 PM
To: Gregory Van Seghbroeck <gregory.vanseghbroeck at intec.ugent.be>
Cc: bigdata at lists.ubuntu.com
Subject: Re: %pyspark in Zeppelin: No module named pyspark error
Hi Gregory,
I wasn't able to see your data after "Our environment is set up as follows:"
<big black box for me>
Will you reply with the output (or a pastebin link) of the following commands:
juju version
juju status --format=tabular
Kostas has found a potential Zeppelin issue in the bigtop charms where the bigtop spark offering may be too old. Knowing your juju and charm versions will help me determine whether your issue is related.
Thanks!
-Kevin
On Mon, Jul 11, 2016 at 7:36 AM, Gregory Van Seghbroeck <gregory.vanseghbroeck at intec.ugent.be <mailto:gregory.vanseghbroeck at intec.ugent.be> > wrote:
Dear,
We have deployed Zeppelin with juju and connected it to Spark. According to juju, everything went well, and we can see this is indeed the case: when we execute one of the Zeppelin tutorials we see some nice graphs. However, whenever we try to use the Python interpreter (%pyspark), we get a "No module named pyspark" error.
Our environment is set up as follows:
Do you have any pointers to what can be wrongly configured?
The ‘PYTHONPATH’ variable is this long because I restarted the unit with the extra ‘/usr/lib/spark/python/lib/pyspark.zip’ entry on the path.
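For anyone curious what that PYTHONPATH entry is meant to accomplish: the interpreter needs Spark's Python sources (and the zips bundled under python/lib) on sys.path before "import pyspark" can succeed. A rough sketch, assuming the /usr/lib/spark layout these charms use (the path may differ on other installs, and the helper name is mine):

```python
import glob
import os
import sys


def add_pyspark_to_path(spark_home):
    """Put SPARK_HOME/python and the zips under python/lib on sys.path.

    This mirrors what setting PYTHONPATH before starting the interpreter
    does; returns the list of paths it considered.
    """
    paths = [os.path.join(spark_home, "python")]
    paths += glob.glob(os.path.join(spark_home, "python", "lib", "*.zip"))
    for p in paths:
        if p not in sys.path:
            sys.path.insert(0, p)
    return paths


# On the unit, SPARK_HOME would typically be /usr/lib/spark.
# add_pyspark_to_path(os.environ.get("SPARK_HOME", "/usr/lib/spark"))
```

If the zip is on the path and the import still fails, it would suggest the interpreter process is not inheriting the environment the unit was restarted with.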
Kind Regards,
Gregory
--
Bigdata mailing list
Bigdata at lists.ubuntu.com <mailto:Bigdata at lists.ubuntu.com>
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/bigdata