

Tuesday, August 21, 2018

Create conda recipe to use C extended Python library on PySpark cluster with Cloudera Data Science Workbench

Cloudera Data Science Workbench provides data scientists with secure access to enterprise data using Python, R, and Scala. In the previous article, we introduced how to use your favorite Python libraries on an Apache Spark cluster with PySpark. In the Python world, data scientists often want to use libraries such as XGBoost that include C/C++ extensions. This post shows how to solve that problem by creating a conda recipe with a C extension. Read more in the original post, Create conda recipe to use C extended Python library on PySpark cluster with Cloudera Data Science Workbench, on the Cloudera Engineering Blog.
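The full recipe walkthrough is in the linked post. As a rough sketch of the general shape of such a recipe, a conda package that builds a C extension usually needs a meta.yaml that requests a C compiler at build time. The example below is hypothetical: the package name (mypkg), version, and source URL are placeholders rather than anything from the original article.

    # meta.yaml -- minimal recipe for a Python package with a C extension (illustrative only)
    package:
      name: mypkg              # placeholder package name
      version: "1.0.0"         # placeholder version

    source:
      url: https://example.com/mypkg-1.0.0.tar.gz   # placeholder source tarball

    requirements:
      build:
        - {{ compiler('c') }}  # conda-build pulls in a suitable C compiler toolchain
      host:
        - python
        - pip
        - setuptools
      run:
        - python

    build:
      script: python -m pip install . --no-deps -vv  # compiles the extension during the build

Once conda build produces the package, one common way to get it onto PySpark executors is to install it into a conda environment, archive that environment (for example with conda-pack), pass the archive to spark-submit with --archives, and point PYSPARK_PYTHON at the interpreter inside the unpacked archive on the executors.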


I guess you came to this post by searching for a similar kind of issue in a search engine, and I hope it resolved your problem. If you find this tip useful, just drop a line below and share the link with others; who knows, they might find it useful too.

Stay tuned to my blog, Twitter, or Facebook to read more articles, tutorials, news, tips, and tricks on various technology fields. You can also subscribe to our newsletter with your email ID to stay updated on the latest posts; we will send the newsletter to your registered email address. We will not share your email address with anybody, as we respect your privacy.


This article is related to:

CDH, Data Science, How-to, Spark, Cloudera Data Science Workbench, PySpark, Python
