    2,000 hadoop jobs found, pricing in GBP

    I am looking for a skilled professional who can help with various aspects of data processing, storage and workflow management using PySpark, SQL, Airflow, S3, Hadoop, and Kubernetes. Key areas of support required include: - Data processing in PySpark: Assistance with data cleaning and transformation tasks in PySpark. - Querying databases in SQL: Advanced optimization techniques, custom queries and data retrieval. - Workflow management in Airflow: Custom workflow automation and scheduling tasks. - Data storage and manipulation in S3 and Hadoop: Support for large-scale data storage and processing. - Container orchestration in Kubernetes: Assistance with container deployment and scaling. Specifically, the ideal candidate should be able to set up tables on Iceberg in Parquet ...

    £413 (Avg Bid)
    15 bids
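The cleaning-and-transformation work this post asks for can be sketched in miniature. The sketch below is plain Python rather than PySpark, and the field names (`id`, `city`, `amount`) are illustrative assumptions, not from the post: it drops duplicate rows, fills a missing value, and normalises a column.

```python
# Stand-in sketch (plain Python, not PySpark) of typical cleaning steps:
# deduplicate, fill missing values, normalise. Field names are hypothetical.
RAW = [
    {"id": 1, "city": "London", "amount": 10.0},
    {"id": 1, "city": "London", "amount": 10.0},   # duplicate row
    {"id": 2, "city": None,     "amount": 12.5},   # missing value
]

def clean(rows, default_city="unknown"):
    seen, out = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))  # hashable identity for dedup
        if key in seen:
            continue
        seen.add(key)
        fixed = dict(row)
        fixed["city"] = (fixed["city"] or default_city).lower()
        out.append(fixed)
    return out

print(clean(RAW))
```

In PySpark the same intent would be expressed with `dropDuplicates`, `fillna`, and column expressions, but the logic above is the part that carries over.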

    ...trends and new technology applications to enhance business and data systems. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. Proven experience as a Data Engineer, Data Scientist, or similar role with a focus on data analysis and machine learning. Strong analytic skills related to working with unstructured datasets. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with data pipeline and workflow management tools. Experience with AWS cloud services: EC2, EMR, RDS, Redshift. Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc. Proficiency in SQL and experience with relational databases, query authoring (SQL) as well as familiarity with a variety of databases...

    £18 / hr (Avg Bid)
    46 bids

    Seeking a skilled developer to optimize and enhance the architecture of our existing web scraper application. The application is currently built using NestJS and PostgreSQL, and we are looking to scale it up and leverage cloud functionality for improved p...error handling, rate limiting, and IP rotation. - Strong problem-solving skills and ability to optimize application performance. - Excellent communication and collaboration skills. Nice to have: - Experience with PostgreSQL and database optimization techniques. - Knowledge of additional programming languages like Python or Java. - Familiarity with data processing frameworks like Apache Spark or Hadoop. - Experience with data visualization and reporting tools. Potential for ongoing collaboration based on performance and future req...

    £439 (Avg Bid)
    78 bids

    I need someone to create the DAG and trigger it. I am working on a migration project from Hadoop to BigQuery. More details will be shared via chat.

    £14 (Avg Bid)
    5 bids

    I am seeking a skilled data engineering trainer; speed in using Hadoop, Apache Spark, and SQL is paramount. Your expertise will guide me through nuanced uses of these technologies, with a particular focus on data migration. Key Requirements: - Proficiency in Hadoop, Apache Spark, and SQL - More than 10 hours availability weekly - Proven experience in real-world data migration projects Ideal candidates should have a flair for explaining complex concepts in simple language. This engagement will focus on moving data from diverse sources into a data warehouse, thereby making it readily available for business intelligence functions.

    £6 / hr (Avg Bid)
    6 bids

    I'm in need of a proficient professional versed in Java Hadoop clusters. Please place your bids immediately. $20 for this project

    £2 / hr (Avg Bid)
    3 bids

    I am in urgent need of a Hadoop/Spark developer who is proficient in both Scala and Python for a data processing task. I have a huge volume of unstructured data that needs to be processed and analyzed swiftly and accurately. Key Project Responsibilities: - Scrubbing and cleaning the unstructured data to detect and correct errors. - Designing algorithms using Scala and Python to process data in Hadoop/Spark. - Ensuring effective data processing and overall system performance. The perfect fit for this role is a professional who has: - Expertise in Hadoop and Spark frameworks. - Proven experience in processing unstructured data. - Proficient coding skills in both Scala and Python. - Deep understanding of data structures and algorithms. - Familiarity with data ...

    £20 / hr (Avg Bid)
    39 bids

    ...and natural language processing 3. Strong proficiency in programming languages such as Python, Java, and C++, as well as web development frameworks like Node.js and React 4. Experience with cloud computing platforms such as AWS, Azure, or Google Cloud, and containerization technologies like Docker and Kubernetes 5. Familiarity with data engineering and analytics tools and techniques, such as Hadoop, Spark, and SQL 6. Excellent problem-solving and analytical skills, with the ability to break down complex technical challenges into manageable components and solutions 7. Strong project management and communication skills, with the ability to collaborate effectively with both technical and non-technical stakeholders 8. Familiarity with agile development methodologies and best pr...

    £1285 (Avg Bid)
    NDA
    97 bids

    We are looking for an Informatica BDM developer with 7+ yrs of experience, who can support us for 8 hours a day, Monday to Friday. Title: Informatica BDM Developer Experience: 5+ Yrs 100% Remote Contract: Long term Timings: 10:30 am - 07:30 pm IST Required Skills: Informatica Data Engineering, DIS and MAS • Databricks, Hadoop • Relational SQL and NoSQL databases, including some of the following: Azure Synapse/SQL DW and SQL Database, SQL Server and Oracle • Core cloud services from at least one of the major providers in the market (Azure, AWS, Google) • Agile Methodologies, such as SCRUM • Task tracking tools, such as TFS and JIRA

    £1037 (Avg Bid)
    5 bids

    ...which will include parameters such as patient age ranges, geographical regions, social conditions, and specific types of cardiovascular diseases. Key responsibilities: - Process distributed data using Hadoop/MapReduce or Apache Spark - Developing an RNN model (preferably Python) - Analyzing the complex CSV data (5000+ records) - Identifying and predicting future trends based on age, region, types of diseases and other factors - Properly visualizing results in digestible diagrams Ideal candidates should have: - Experience in data analysis with Python - Solid understanding of Hadoop/MapReduce or Apache Spark - Proven ability in working with Recurrent Neural Networks - Excellent visualization skills to represent complex data in static or dynamic dashboards - Experien...

    £389 (Avg Bid)
    84 bids
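The trend-analysis part of the post above (counts by age range, region, and disease type) can be sketched with the standard library alone; the CSV column names (`age`, `region`, `disease`) are assumptions for illustration, and real work would run this logic at scale in Spark or MapReduce.

```python
import csv
import io
from collections import Counter

# Hypothetical minimal sketch: counting cardiovascular-disease records
# by age band, region, and disease type from a CSV like the one described.
SAMPLE = """age,region,disease
45,North,hypertension
67,South,arrhythmia
52,North,hypertension
"""

def band(age):
    """Bucket an age into a coarse 20-year range, e.g. 45 -> '40-59'."""
    lo = (age // 20) * 20
    return f"{lo}-{lo + 19}"

def trend_counts(csv_text):
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[(band(int(row["age"])), row["region"], row["disease"])] += 1
    return counts

print(trend_counts(SAMPLE)[("40-59", "North", "hypertension")])  # 2
```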

    I am looking for an experienced Senior Data Engineer for interview training. Your primary responsibility would be data cleaning and preprocessing, design and optimize database and implem...preprocessing, design and optimize database and implement ETL processes. Key responsibilities include: - Clean and preprocess data to ensure its quality and efficiency. - Design and optimize databases, aiming for both flexibility and speed. - Implement ETL (Extract, Transform, Load) processes to facilitate the effective and secure moving of data. Skills and Experience: - Proficient in Python, SQL, and Hadoop. - Expertise in handling medium-sized databases (1GB-1TB). - Proven track record in ETL processes handling. Your expertise in these areas will be crucial to the successful completion of thi...

    £42 / hr (Avg Bid)
    17 bids

    I have encountered a problem with my Hadoop project and need assistance. My system is showing "HADOOP_HOME and hadoop.home.dir are unset", and I am not certain if I've set the HADOOP_HOME and related environment variables correctly. This happens when creating a pipeline release in DevOps. In this project, I am looking for someone who: - Has extensive knowledge about Hadoop and its environment variables - Can determine whether I have set the HADOOP_HOME and related variables correctly and resolve any issues regarding the same - Is able to figure out the version of Hadoop installed on my system and solve compatibility issues if any I will pay for the solution immediately.

    £18 / hr (Avg Bid)
    15 bids
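The "HADOOP_HOME ... unset" symptom described above usually comes down to the environment variable being missing or pointing at a directory that does not exist. A minimal diagnostic sketch (not a fix for any particular DevOps pipeline):

```python
import os

def check_hadoop_home():
    """Return a diagnostic string for the common
    'HADOOP_HOME and hadoop.home.dir are unset' error."""
    home = os.environ.get("HADOOP_HOME")
    if not home:
        return "HADOOP_HOME is unset; export it and add $HADOOP_HOME/bin to PATH"
    if not os.path.isdir(home):
        return f"HADOOP_HOME points at a missing directory: {home}"
    return f"HADOOP_HOME looks OK: {home}"

print(check_hadoop_home())
```

In a pipeline, the same check belongs in a pre-step so the release fails fast with a clear message instead of deep inside a Hadoop stack trace.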

    *Title: Freelance Data Engineer* *Description:* We are seeking a talented freelance data engineer to join our team on a project basis. The ideal candidate will have a strong background in data engineering, with expertise in designing, implementing, and maintaining data pipelines and infrastructure. You will work closely with our data scientists and analysts to ensure the smooth flow of data from various sources to our data warehouse, and to support the development of analytics and machine learning solutions. This is a remote position with flexible hours. *Responsibilities:* - Design, build, and maintain scalable and efficient data pipelines to collect, process, and store large volumes of data from diverse sources. - Collaborate with data scientists and analysts to understand data require...

    £66 (Avg Bid)
    3 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MapReduce (MR). The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processes - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    £134 (Avg Bid)
    14 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MapReduce (MR). The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processes - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    £144 (Avg Bid)
    26 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MapReduce (MR). The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processes - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    £17 (Avg Bid)
    11 bids

    ...commonly used packages, especially with GCP. Hands-on experience in data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud DataProc, Cloud Storage, Cloud DataPrep, Cloud PubSub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python JSON nested data operations. Exposure to or knowledge of API design, REST including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    £10 / hr (Avg Bid)
    11 bids

    I am looking for a skilled professional who can efficiently set up a big data cluster. REQUIREMENTS: • Proficiency in Elasticsearch, Hadoop, Spark, and Cassandra • Experience in working with large-scale data storage (10+ terabytes). • Able to structure data effectively. SPECIFIC TASKS INCLUDE: - Setting up the Elasticsearch, Hadoop, Spark, and Cassandra big data cluster. - Ensuring the data to be stored is structured. - Prep for the ability to handle more than 10 terabytes of data. The ideal candidate will have substantial experience in large data structures and a deep understanding of big data database technology. I encourage experts in big data management and those well-versed in big data best practices to bid for this project.

    £24 / hr (Avg Bid)
    3 bids

    We are looking for an Informatica BDM developer with 7+ yrs of experience, who can support us for 8 hours a day, Monday to Friday. Title: Informatica BDM Developer Experience: 5+ Yrs 100% Remote Contract: Long term Timings: 10:30 am - 07:30 pm IST Required Skills: Informatica Data Engineering, DIS and MAS • Databricks, Hadoop • Relational SQL and NoSQL databases, including some of the following: Azure Synapse/SQL DW and SQL Database, SQL Server and Oracle • Core cloud services from at least one of the major providers in the market (Azure, AWS, Google) • Agile Methodologies, such as SCRUM • Task tracking tools, such as TFS and JIRA

    £971 (Avg Bid)
    3 bids

    I am seeking a skilled professional proficient in managing big data tasks with Hadoop, Hive, and PySpark. The primary aim of this project involves processing and analyzing structured data. Key Tasks: - Implementing Hadoop, Hive, and PySpark for my project to analyze large volumes of structured data. - Using Hive and PySpark for sophisticated data analysis and processing techniques. Ideal Skills: - Proficiency in the Hadoop ecosystem - Experience with Hive and PySpark - Strong background in working with structured data - Expertise in big data processing and data analysis - Excellent problem-solving and communication skills Deliverables: - Converting raw data into useful information using Hive and visualizing query results as graphical representations. - C...

    £14 / hr (Avg Bid)
    15 bids

    ...currently seeking a Hadoop Professional with strong expertise in Pyspark for a multi-faceted project. Your responsibilities will extend to but not limited to: - Data analysis: You'll be working with diverse datasets including customer data, sales data and sensor data. Your role will involve deciphering this data, identifying key patterns and drawing out impactful insights. - Data processing: A major part of this role will be processing the mentioned datasets, and preparing them effectively for analysis. - Performance optimization: The ultimate aim is to enhance our customer targeting, boost sales revenue and identify patterns in sensor data. Utilizing your skills to optimize performance in these sectors will be highly appreciated. The ideal candidate will be skilled in ...

    £369 (Avg Bid)
    25 bids

    ...R), and other BI essentials, join us for global projects. What We're Looking For: Business Intelligence Experts with Training Skills: Data analysis, visualization, and SQL Programming (Python, R) Business acumen and problem-solving Effective communication and domain expertise Data warehousing and modeling ETL processes and OLAP Statistical analysis and machine learning Big data technologies (Hadoop, Spark) Agile methodologies and data-driven decision-making Cloud technologies (AWS, Azure) and data security NoSQL databases and web scraping Natural Language Processing (NLP) and sentiment analysis API integration and data architecture Why Work With Us: Global Opportunities: Collaborate worldwide across diverse industries. Impactful Work: Empower businesses through data-drive...

    £17 / hr (Avg Bid)
    24 bids

    I'm launching an extensive project that needs a proficient expert in Google Cloud Platform (including BigQuery, GCS, Airflow/Composer), Hadoop, Java, Python, and Splunk. The selected candidate should display exemplary skills in these tools, and offer long-term support. Key Responsibilities: - Data analysis and reporting - Application development - Log monitoring and analysis Skills Requirements: - Google Cloud Platform (BigQuery, GCS, Airflow/Composer) - Hadoop - Java - Python - Splunk The data size is unknown at the moment, but proficiency in managing large datasets will be advantageous. Please place your bid taking into account all these factors. Your prior experience handling similar projects will be a plus. I look forward to working with a dedicated and know...

    £389 (Avg Bid)
    53 bids

    ...commonly used packages, especially with GCP. Hands-on experience in data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud DataProc, Cloud Storage, Cloud DataPrep, Cloud PubSub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python JSON nested data operations. Exposure to or knowledge of API design, REST including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    £11 / hr (Avg Bid)
    6 bids

    As an ecommerce platform looking to optimize our data management, I require assistance with several key aspects of my AWS big data project, including: - Data lake setup and configuration - Development of AWS Glue jobs - Deployment of Hadoop and Spark clusters - Kafka data streaming The freelancer hired for this project must possess expertise in AWS, Kafka, and Hadoop. Strong experience with AWS Glue is essential given the heavy utilization planned for the tool throughout the project. Your suggestions and recommendations regarding these tools and technologies will be heartily welcomed, but keep in mind specific tools are needed to successfully complete this project.

    £673 (Avg Bid)
    20 bids

    ...Queries: Write a SQL query to find the second highest salary. Design a database schema for a given problem statement. Optimize a given SQL query. Solution Design: Design a parking lot system using object-oriented principles. Propose a data model for an e-commerce platform. Outline an approach to scale a given algorithm for large datasets. Big Data Technologies (if applicable): Basic questions on Hadoop, Spark, or other big data tools. How to handle large datasets efficiently. Writing map-reduce jobs (if relevant to the role). Statistical Analysis and Data Processing: Write a program to calculate statistical measures like mean, median, mode. Implement data normalization or standardization techniques. Process and analyze large datasets using Python libraries like Pandas. Rememb...

    £6 / hr (Avg Bid)
    36 bids
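The first interview question listed above (second highest salary) has a classic formulation: take the maximum of all salaries strictly below the overall maximum. A sketch using sqlite3, with an illustrative `employees` table:

```python
import sqlite3

# Sketch of the "second highest salary" interview query mentioned above.
# Table and column names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [("ann", 90000), ("bob", 120000), ("cho", 110000)])

# MAX below the overall maximum; yields NULL if there are
# fewer than two distinct salary values.
row = con.execute(
    "SELECT MAX(salary) FROM employees "
    "WHERE salary < (SELECT MAX(salary) FROM employees)"
).fetchone()
print(row[0])  # 110000
```

The subquery form handles duplicate top salaries correctly, which the naive `ORDER BY salary DESC LIMIT 1 OFFSET 1` does not unless you add `DISTINCT`.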

    ...customer-centric software products · Analyze existing software implementations to identify areas of improvement and provide deadline estimates for implementing new features · Develop software applications using technologies that include but are not limited to core Java (11+), Kafka or another messaging system, web frameworks like Struts / Spring, relational (Oracle) and non-relational databases (SQL, MongoDB, Hadoop, etc.), with RESTful microservice architecture · Implement security and data protection features · Update and maintain documentation for team processes, best practices, and software runbooks · Collaborating with git in a multi-developer team · Appreciation for clean and well documented code · Contribution to database design ...

    £1115 (Avg Bid)
    50 bids

    Project Title: Advanced Hadoop Administrator Description: - We are seeking an advanced Hadoop administrator for an inhouse Hadoop setup project. - The ideal candidate should have extensive experience and expertise in Hadoop administration. - The main tasks of the Hadoop administrator will include data processing, data storage, and data analysis. - The project is expected to be completed in less than a month. - The Hadoop administrator will be responsible for ensuring the smooth functioning of the Hadoop system and optimizing its performance. - The candidate should have a deep understanding of Hadoop architecture, configuration, and troubleshooting. - Experience in managing large-scale data processing and storage environments is requi...

    £247 (Avg Bid)
    3 bids

    I am looking for a freelancer to help me with a Proof of Concept (POC) project focusing on Hadoop. Requirement: We drop a file in HDFS, which is then pushed to Spark or Kafka, which pushes the final output/results into a database. The objective is to show we can handle millions of records as input and put them in the destination. The POC should be completed within 3-4 days and should have a simple level of complexity. Skills and experience required: - Strong knowledge and experience with Hadoop - Familiarity with HDFS and Kafka/Spark - Ability to quickly understand and implement a simple POC project - Good problem-solving skills and attention to detail

    £135 (Avg Bid)
    9 bids
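The shape of the POC above (file in, processing step, results into a database) can be miniaturised with the standard library; here a local text file stands in for HDFS, sqlite3 for the destination database, and word counting for the real transformation. Everything beyond that shape is a stand-in assumption.

```python
import os
import sqlite3
import tempfile

# Miniature of a file -> processing -> database pipeline.
def run_pipeline(input_path, con):
    con.execute("CREATE TABLE IF NOT EXISTS word_counts (word TEXT, n INTEGER)")
    counts = {}
    with open(input_path) as f:          # "ingest" phase
        for line in f:
            for word in line.split():    # "process" phase
                counts[word] = counts.get(word, 0) + 1
    con.executemany("INSERT INTO word_counts VALUES (?, ?)", counts.items())
    con.commit()                         # "load" phase

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "input.txt")
    with open(path, "w") as f:
        f.write("hadoop spark hadoop\n")
    con = sqlite3.connect(":memory:")
    run_pipeline(path, con)
    print(dict(con.execute("SELECT word, n FROM word_counts")))
    # {'hadoop': 2, 'spark': 1}
```

At the scale the post targets, the ingest becomes an HDFS watch or Kafka consumer and the process phase a Spark job, but the three-stage contract stays the same.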

    ...of DataNode 3: Mike Set the last two digits of the IP address of each DataNode: IP address of DataNode 1: IP address of DataNode 2: IP address of DataNode 3: Submission Requirements: Submit the following screenshots: Use commands to create three directories on HDFS, named after the first name of each team member. Use commands to upload the Hadoop package to HDFS. Use commands to show the IP addresses of all DataNodes. Provide detailed information (ls -l) of the blocks on each DataNode. Provide detailed information (ls -l) of the fsimage file and edit log file. Include screenshots of the Overview module, Startup Process module, DataNodes module, and Browse Directory module on the Web UI of HDFS. MapReduce Temperature Analysis You are

    £12 (Avg Bid)
    2 bids

    Big data project in Java needs to be done in 24 hours. The person needs to be experienced in Spark and Hadoop.

    £105 (Avg Bid)
    10 bids

    Looking for a Hadoop specialist to design the query optimisation. Currently, the search freezes when the user tries to run more than one search at a time. We need to implement a solution. This is a remote project. Share your idea first if you have done any such work. The UI is in React and the backend is in Node.js.

    £13 / hr (Avg Bid)
    38 bids

    # Your code goes here (JRuby, run inside the HBase shell; reconstructed
    # from the garbled listing, so the imports and exact Put.add form are a
    # best-effort restoration of the standard put_many helper)
    import 'org.apache.hadoop.hbase.client.HTable'
    import 'org.apache.hadoop.hbase.client.Put'

    def jbytes(*args)
      args.map { |arg| arg.to_s.to_java_bytes }
    end

    def put_many(table_name, row, column_values)
      table = HTable.new(@hbase.configuration, table_name)
      p = Put.new(*jbytes(row))
      column_values.each do |column, value|
        family, qualifier = column.split(':')
        p.add(*jbytes(family, qualifier, value))
      end
      table.put(p)
    end

    # Call put_many function with sample data
    put_many 'wiki', 'DevOps', {
      "text:" => "What DevOps IaC do you use?",
      "revision:author" => "Frayad Gebrehana",
      "revision:comment" => "Terraform"
    }

    # Get data from the 'wiki' table
    get 'wiki', 'DevOps'

    # Do not remove the exit call below
    exit

    £48 (Avg Bid)
    7 bids

    I am in need of assistance with Hadoop for the installation and setup of the platform. Skills and experience required: - Proficiency in Hadoop installation and setup - Knowledge of different versions of Hadoop (Hadoop 1.x and Hadoop 2.x) - Ability to work within a tight timeline (project needs to be completed within 7 hours) Please note that there is no specific preference for the version of Hadoop to be used.

    £10 (Avg Bid)
    2 bids

    Wordpress Black theme Design in photo Images can be taken from Udemy Content here Content Coupon Code: 90OFFOCT23 (subscribe by 7 Oct '23 or till stock lasts) Data Engineering Career Path: Big Data Hadoop and Spark with Scala: Scala Programming In-Depth: Apache Spark In-Depth (Spark with Scala): DP-900: Microsoft Azure Data Fundamentals: Data Science Career Path: Data Analysis In-Depth (With Python): https://www

    £6 (Avg Bid)
    Guaranteed
    4 entries

    Seeking an expert in both Hadoop and Spark to assist with various big data projects. The ideal candidate should have intermediate level expertise in both Hadoop and Spark. Skills and experience needed for the job: - Proficiency in Hadoop and Spark - Intermediate level expertise in Hadoop and Spark - Strong understanding of big data concepts and tools - Experience working on big data projects - Familiarity with data processing and analysis using Hadoop and Spark - Ability to troubleshoot and optimize big data tools - Strong problem-solving skills and attention to detail

    £18 / hr (Avg Bid)
    12 bids

    I am looking for a freelancer to compare the performance metrics of Hadoop, Spark, and Kafka using the data that I will provide. Skills and experience required: - Strong knowledge of big data processing architectures, specifically Hadoop, Spark, and Kafka - Proficiency in analyzing and comparing performance metrics - Ability to present findings through written analysis, graphs and charts, and tables and figures The comparison should focus on key performance metrics such as processing speed, scalability, fault tolerance, throughput, and latency. The freelancer should be able to provide a comprehensive analysis of these metrics and present them in a clear and visually appealing manner. I will explain more about the data

    £125 (Avg Bid)
    23 bids
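A comparison like the one requested above ultimately reduces to measuring a few numbers, chiefly throughput (records/sec) and per-batch latency, under each system. A tiny stand-in harness (not wired to Hadoop, Spark, or Kafka; the workloads here are placeholders) showing how those two metrics can be collected for any processing function:

```python
import statistics
import time

# Measure throughput and per-batch latency for a processing callable.
# The callables benchmarked below are stand-ins, not real big-data jobs.
def benchmark(process, records, batch_size=1000):
    latencies = []
    start = time.perf_counter()
    for i in range(0, len(records), batch_size):
        t0 = time.perf_counter()
        process(records[i:i + batch_size])
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {"throughput_rps": len(records) / elapsed,
            "median_latency_s": statistics.median(latencies)}

stats = benchmark(lambda batch: [r * 2 for r in batch], list(range(10_000)))
print(stats)
```

For the real comparison, the same harness idea applies: hold the dataset and batch size constant, swap the `process` implementation per system, and report the resulting metrics side by side in tables and charts.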

    Looking for Hadoop Hive Experts I am seeking experienced Hadoop Hive experts for a personal project. Requirements: - Advanced level of expertise in Hadoop Hive - Strong understanding of big data processing and analysis - Proficient in Hive query language (HQL) - Experience with data warehousing and ETL processes - Familiarity with Apache Hadoop ecosystem tools (e.g., HDFS, MapReduce) - Ability to optimize and tune Hadoop Hive queries for performance If you have a deep understanding of Hadoop Hive and can effectively analyze and process big data, then this project is for you. Please provide examples of your previous work in Hadoop Hive and any relevant certifications or qualifications. I am flexible with the timeframe for completing the...

    £16 (Avg Bid)
    2 bids

    I am looking for a Kafka Admin who can assist me with the following tasks: - Onboarding Kafka cluster - Managing Kafka topics and partitions - It's already available in the company and we need to onboard it for our project. - Should be able to size and scope. - We will start with small data ingestion from the Hadoop data lake. - Should be willing to work on a remote machine. The ideal candidate should have experience in: - Setting up and configuring Kafka clusters - Managing Kafka topics and partitions - Troubleshooting Kafka performance issues The client already has all the necessary hardware and software for the Kafka cluster setup.

    £14 / hr (Avg Bid)
    10 bids

    Over the past years, I have devoted myself to a project involving Algorithmic Trading. My system leverages only pricing and volume data at market closing. It studies technical indicators for every stock in the S&P 500 from its IPO date, testing all possible indicator 'settings', as I prefer to call them. This process uncovers microscopic signals that suggest beneficial buying at market close and selling at the next day's close. Any signal with a p-value below 0.01 is added to my portfolio. Following this, the system removes correlated signals to prevent duplication. A Bayesian ranking of signals is calculated, and correlated signals with a lower rank are eliminated. The result is a daily optimized portfolio of buy/sell signals. This system, primarily built with numpy...

    £30 / hr (Avg Bid)
    NDA
    13 bids
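The filtering logic the post above describes (keep signals with p-value below 0.01, then drop the lower-ranked member of any highly correlated pair) can be sketched as a greedy pass over rank-sorted signals. The signal names, correlation values, thresholds, and field layout below are illustrative stand-ins, not the poster's actual system.

```python
# Greedy sketch: p-value cut, then correlation pruning by rank.
def prune_signals(signals, corr, p_cut=0.01, corr_cut=0.9):
    """signals: list of dicts with 'name', 'p', 'rank' (higher rank wins).
    corr: callable returning the correlation between two signal names."""
    passed = [s for s in signals if s["p"] < p_cut]
    passed.sort(key=lambda s: s["rank"], reverse=True)  # best rank first
    kept = []
    for s in passed:
        # keep s only if it is not highly correlated with anything kept
        if all(abs(corr(s["name"], k["name"])) < corr_cut for k in kept):
            kept.append(s)
    return [s["name"] for s in kept]

# Toy symmetric correlation table between three hypothetical signals.
C = {("a", "b"): 0.95, ("a", "c"): 0.1, ("b", "c"): 0.2}
corr = lambda x, y: 1.0 if x == y else C.get((x, y), C.get((y, x), 0.0))

sigs = [{"name": "a", "p": 0.005, "rank": 2},
        {"name": "b", "p": 0.004, "rank": 1},   # correlated with 'a', lower rank
        {"name": "c", "p": 0.5,   "rank": 3}]   # fails the p-value cut
print(prune_signals(sigs, corr))  # ['a']
```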

    I am looking for a Hadoop developer with a strong background in data analysis. The scope of the project involves analyzing and interpreting data using Hadoop. The ideal candidate should have experience in Hadoop data analysis and be able to work on the project within a timeline of less than 1 month.

    £197 (Avg Bid)
    4 bids

    I am looking for a Hadoop developer with a strong background in data analysis. The scope of the project involves analyzing and interpreting data using Hadoop. The ideal candidate should have experience in Hadoop data analysis and be able to work on the project within a timeline of less than 1 month.

    £10 (Avg Bid)
    3 bids

    1: model and implement efficient big data solutions for various application areas using appropriately selected algorithms and data structures. 2: analyse methods and algorithms, to compare and evaluate them with respect to time and space requirements and make appropriate design choices when solving real-world problems. 3: motivate and explain trade-offs in big data processing technique design and analysis in written and oral form. 4: explain the Big Data Fundamentals, including the evolution of Big Data, the characteristics of Big Data and the challenges introduced. 6: apply the novel architectures and platforms introduced for Big Data, i.e., Hadoop, MapReduce and Spark, to complex problems on the Hadoop execution pl...

    £102 (Avg Bid)
    9 bids

    I am looking for a freelancer who can help me with an issue I am facing with launching Apache Gobblin in YARN. Here are the details of the project: Error Message: NoClassDefFoundError (Please note that this question was skipped, so the error message may not be accurate) Apache Gobblin Version: 2.0.0 YARN Configuration: Not sure Skills and Experience: - Strong knowledge and experience with Apache Gobblin - Expertise in Hadoop, YARN configuration and troubleshooting - Familiarity with Interrupt exception and related issues - Ability to diagnose and resolve issues in a timely manner - Excellent communication skills to effectively collaborate with me and understand the problem If you have the required skills and experience, please bid on thi...

    £20 / hr (Avg Bid)
    10 bids

    Write MapReduce programs that give you a chance to develop an understanding of principles when solving complex problems on the Hadoop execution platform.

    £20 (Avg Bid)
    9 bids
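The principle these exercises target, the map -> shuffle -> reduce structure, can be shown without a cluster. The sketch below is plain Python standing in for the Hadoop API, using word count as the canonical example:

```python
from collections import defaultdict

# Map phase: emit (key, value) pairs, here (word, 1).
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group all values by key (Hadoop does this between stages).
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: fold each key's values into one result.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

result = reduce_phase(shuffle(map_phase(["Hadoop runs jobs", "hadoop scales"])))
print(result)  # {'hadoop': 2, 'runs': 1, 'jobs': 1, 'scales': 1}
```

On real Hadoop the same three roles map onto a `Mapper` class, the framework's sort-and-shuffle, and a `Reducer` class, with the framework distributing each phase across nodes.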

    It's a Java Hadoop MapReduce task. The program should run on Windows OS. An algorithm must be devised and implemented that can recognize the language of a given text. Thank you.

    £26 (Avg Bid)
    8 bids
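One simple algorithm for the language-recognition task above is stopword scoring: count how many of each language's common function words occur in the text and pick the best-scoring language. The tiny word lists below are illustrative (a real solution would use larger lists or character n-gram profiles, and would run the counting as a MapReduce job per the post):

```python
# Minimal stopword-overlap language detector. Word lists are illustrative.
STOPWORDS = {
    "english": {"the", "and", "is", "of", "to", "in"},
    "german": {"der", "die", "und", "ist", "von", "zu"},
    "french": {"le", "la", "et", "est", "de", "dans"},
}

def detect_language(text):
    words = set(text.lower().split())
    # Score each language by how many of its stopwords appear in the text.
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

print(detect_language("the cat is in the garden"))    # english
print(detect_language("der Hund ist von der Katze"))  # german
```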

    Looking for a freelancer to help with a simple Hadoop SPARK task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and SPARK - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: Write a report that contains the following steps: 1. Write steps of Spark & Hadoop setup with some screenshots. 2. Import Libraries and Set Work Background (steps + screenshots) 3. Load and Discover Data (steps + screenshots + code) 4. Data Cleaning and Preprocessing (steps + screenshots + code) 5. Data Analysis - Simple Analysis (explanation, screenshots, code) - Moderate Analysis (explanation

    £26 (Avg Bid)
    8 bids

    Looking for a freelancer to help with a simple Hadoop SPARK task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and SPARK - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: Write a report that contains the following steps: 1. Write steps of Spark & Hadoop setup with some screenshots. 2. Import Libraries and Set Work Background (steps + screenshots) 3. Load and Discover Data (steps + screenshots + code) 4. Data Cleaning and Preprocessing (steps + screenshots + code) 5. Data Analysis - Simple Analysis (explanation, screenshots, code) - Moderate Analysis (explanation

    £24 (Avg Bid)
    6 bids

    I am looking for an advanced Hadoop trainer for an online training program. I have some specific topics to be covered as part of the program, and it is essential that the trainer can provide in-depth knowledge and expertise in Hadoop. The topics to be discussed include Big Data technologies, Hadoop administration, Data warehousing, MapReduce, HDFS Architecture, Cluster Management, Real Time Processing, HBase, Apache Sqoop, and Flume. Of course, the trainer should also have good working knowledge about other Big Data topics and techniques. In addition to the topics mentioned, the successful candidate must also demonstrate the ability to tailor the course to meet the learner's individual needs, making sure that the classes are engaging and fun. The trainer must ...

    £11 / hr (Avg Bid)
    1 bid

    I am looking for a freelancer with some experience in working with Hadoop and Spark, specifically in setting up a logging platform. I need full assistance in setting up the platform and answering analytical questions using log files within Hadoop. Ideal skills and experience for this project include: - Experience working with Hadoop and Spark - Knowledge of setting up logging platforms - Analytical skills to answer questions using log files

    £33 (Avg Bid)
    4 bids
