PySpark Jobs
PySpark is the Python API for Apache Spark, and it makes it easy to write applications that process data at scale. With PySpark, you can write richer and more powerful data processing programs using the Python skills you already have.
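To make that concrete, here is a minimal sketch of a PySpark job; the input file "events.csv" and the column names are placeholders, and the sketch assumes a local Spark installation.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session
spark = SparkSession.builder.appName("example-job").getOrCreate()

# Read a CSV file into a DataFrame (the path is a placeholder)
df = spark.read.csv("events.csv", header=True, inferSchema=True)

# A typical transformation: filter rows, then aggregate
result = (df
    .filter(F.col("status") == "ok")
    .groupBy("country")
    .count())

result.show()
spark.stop()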
It's PySpark code; we need to optimise it.
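The posting doesn't say what needs optimising, but a common first step in PySpark tuning is replacing a shuffle-heavy join with a broadcast join when one side is small. A hedged sketch, with hypothetical table paths:

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("optimise-join").getOrCreate()

facts = spark.read.parquet("s3://bucket/facts/")    # large table (placeholder path)
lookup = spark.read.parquet("s3://bucket/lookup/")  # small dimension table

# Broadcasting the small side avoids shuffling the large table
joined = facts.join(broadcast(lookup), on="id", how="left")

# Cache only if the result is reused by several downstream actions
joined.cache()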
There are a few R scripts that need to be converted to PySpark. The work is almost done; some remaining logic needs to be implemented and tested.
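The actual scripts aren't shown, but for illustration, a typical dplyr pipeline maps to PySpark roughly like this; the DataFrame df and the column names are invented:

# R (dplyr):  df %>% filter(age > 30) %>% group_by(city) %>% summarise(n = n())
# PySpark equivalent, assuming df is an existing DataFrame:
from pyspark.sql import functions as F

out = (df
    .filter(F.col("age") > 30)
    .groupBy("city")
    .agg(F.count("*").alias("n")))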
• Expert-level knowledge of AWS services such as EMR and S3
• Extensive experience with Python, PySpark, Hadoop, Hue, Presto, and Bash shell scripting
• Expert in Apache Airflow, creating and troubleshooting DAGs (see the DAG sketch below)
• Good troubleshooting skills
• Good experience with Talend
• Experience with Vertica
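As an illustration of the Airflow skills listed above, a minimal DAG that submits a PySpark application; it assumes Airflow 2.4+ with the apache-spark provider package installed, and the DAG id, application path, and connection id are placeholders:

from datetime import datetime
from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_pyspark_job",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_job = SparkSubmitOperator(
        task_id="run_pyspark_job",
        application="/jobs/etl_job.py",  # placeholder path to the PySpark script
        conn_id="spark_default",
    )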
Looking for an Azure Data Engineer from India for our Enterprise Client (INDIVIDUALS ONLY). Teams/Enterprises/Consultancies/Agencies please stay away.
Project Duration: 3-6 months
Location: Remote/WFH
Hours Required: 40 hrs/week
Responsibilities
• Design Azure Data Lake solution and partition strategy for files
• Explore and load data from structured and semi-structured data sources into ADLS and Azure Blob Storage
• Ingest and transform data using Azure Data Factory
• Ingest and transform data using Azure Databricks and PySpark
• Design and build data engineering pipelines using Azure Data Factory
• Implement data pipelines for full load and incremental data loads (see the sketch after this listing)
• Design and build error handling, data quality routines using Data Factory a...
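To make the incremental-load requirement concrete, here is a hedged sketch of a watermark-based incremental read in PySpark on Azure Databricks, where the spark session is predefined; the storage account, container, column names, and watermark value are all placeholders:

from pyspark.sql import functions as F

# Placeholder ADLS Gen2 path (abfss://<container>@<account>.dfs.core.windows.net/...)
source_path = "abfss://raw@myaccount.dfs.core.windows.net/orders/"

# High-water mark from the previous run (assumed to be tracked elsewhere)
last_loaded = "2024-01-01 00:00:00"

# Read only rows changed since the last run
incremental = (spark.read.format("parquet")
    .load(source_path)
    .filter(F.col("updated_at") > F.lit(last_loaded)))

# Append the new/changed rows to the curated zone as a Delta table
(incremental.write
    .mode("append")
    .format("delta")
    .save("abfss://curated@myaccount.dfs.core.windows.net/orders/"))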