Your guide to getting data entry done for your business
Data entry is an important task, but choosing the wrong solution can seriously harm your company's productivity.
Data Extraction is the process of extracting data from a variety of sources for further analysis. A Data Extractor is someone who helps businesses and organizations gain insight from their data and create descriptive and predictive models. They specialize in finding patterns and relationships that guide decisions and uncover meaningful information. Through carefully crafted queries and processes, our Data Extractors can transform raw data into a useful format that can be used for reporting, analytics, machine learning and more.
Here are some projects that our expert Data Extractors made real:
When you partner with an experienced team of Freelancer's Data Extractors you can access valuable insights from your data that can guide decisions, uncover opportunities and create predictive models with new data sources. Our experts can help you unlock deeper insights with advanced filtering methods and complex coding. Explore the full range of possibilities with our talented community of professionals, capable of delivering comprehensive solutions tailored to your needs.
Ready to launch your very own project on Freelancer.com? We invite you to try us out and hire our experienced Data Extractors to make your design goals a reality. Let their creativity, skill, and proficiency bring something special to your project!
From 126,953 reviews, clients rate our Data Extractors 4.9 out of 5 stars.
I am looking for someone who has experience in document analysis and data extraction to develop themes out of the data.
There are around 20k reviews publicly available, so I can't scroll endlessly; I need you to scrape them for me and put them in a spreadsheet with filters from 1 star to 5 stars. The job is simple for a professional, so please be realistic with prices. If you do this correctly and quickly, I will give you more leads to scrape. Thanks
I'm looking for a qualified freelancer to develop a bot that can navigate the Almaviva Egypt website just like a human would. The bot must be capable of completing three key tasks:
- Filling out all necessary appointment-related information
- Selecting the date and time of the appointment
- Submitting the request for the appointment
Considering the constraints of the website, I require a bot that can still function proficiently with a limited number of appointment slots. Moreover, it must be programmed to input login credentials. A crucial requirement is that it can bypass or solve captcha verifications, ensuring a smooth booking process. The essential skillset for this project comprises expertise in Python, as the bot should be developed in this language. Familiarity with web scra...
I have a folder full of supplier bills in PDF format and I need a clean, repeatable Python script that pulls everything of value out of them and drops it neatly into an Excel workbook. Here is what I expect:
• The script must capture every text field that appears on each bill (invoice number, dates, vendor, totals and any other descriptors).
• It should identify and export any tabular line-item sections so that quantities, descriptions and prices land in true Excel rows and columns, not as a single block of text.
• Embedded images or logos also need to be saved out (ideally into a sub-folder) with a reference back to the originating invoice inside the Excel sheet.
Python tools such as pdfplumber, PyPDF2, camelot, tabula-py, pandas and openpyxl are all fine; c...
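A minimal sketch of the bill-to-workbook flow this brief describes, using pdfplumber and openpyxl (both named as acceptable). File paths and sheet names here are illustrative assumptions, and the library imports are kept inside the I/O function so the pure table-cleaning helper works on its own:

```python
def clean_table(rows):
    """Normalise a pdfplumber table: strip whitespace, replace None with ''."""
    return [[(cell or "").strip() for cell in row] for row in rows]

def bills_to_workbook(pdf_paths, out_path="bills.xlsx"):
    # Lazy imports so clean_table() is usable without these libraries installed.
    import pdfplumber
    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active
    ws.title = "line_items"          # assumed sheet name, not from the brief
    for path in pdf_paths:
        with pdfplumber.open(path) as pdf:
            for page in pdf.pages:
                # page.extract_text() could feed header-field capture separately
                for table in page.extract_tables():
                    for row in clean_table(table):
                        ws.append([path] + row)   # back-reference to the bill
    wb.save(out_path)
```

The back-reference column is one simple way to satisfy the "reference back to the originating invoice" requirement.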
PDF to Excel Data Scraper Needed
Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios)
Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund Monthly Factsheets (PDFs). I will provide the URLs/files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file.
The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% data works.
Scope of Work:
Input: ~24 PDF files (Monthly Factsheets).
Target Funds: For each month, extract data for the Top 10 Equity Funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc. - list wi...
I want to turn my existing catalogues and knowledge base into smart, intuitive WhatsApp agents. Whether you prefer OpenAI’s Agent Builder or an n8n flow is up to you—as long as the final bots handle automation for user support and information dissemination flawlessly. Users should be able to ask a question, receive the right document or answer instantly, and feel as if they are speaking with a well-trained human agent. Alongside the chat experience, I need an end-to-end AI pipeline that automatically extracts raw data from the web, aggregates and cleans it, performs analysis, and then publishes clear visualisations—including map views—so insights are always one step away. I’m comfortable with tools such as Python, Pandas, LangChain, Node, SQL, Power BI, Table...
I am looking for a data entry specialist who has experience with hail maps. The project is small and simple. I am providing the sample data inside the attachment; please look into the file. I know that this data is extracted from a hail trace map and it's free, but I don't know the map and don't know how to extract it. You need to show me this.
Deliverable
• Be able to extract geo-targeted data selected from the map.
• A video showing how to extract the exact data from the hail map.
My budget is $20 for showing me this.
Project Explained Simply: Automated Fundamental Health Checker
This project is a simple tool that checks a company's financial health automatically. You enter a stock ticker (like TATASTEEL or SUNPHARMA), and the tool:
- Pulls key financial data such as Assets, Liabilities, Revenue, EBITDA, PAT, and OCI
- Calculates Total Equity by subtracting liabilities from assets
- Compares the Intrinsic Value with the current market price
- Gives a clear Buy / Hold / Sell signal
The tool is not meant to be a complex valuation model. It's a working MVP that shows end-to-end execution. The demo is simple: enter the ticker, click run, watch the financial data populate, and see the final decision instantly. Including OCI shows a deeper understanding that net profit alone doesn't tell the ful...
I have a growing list of company names, and I need a small, reliable Python script that can:
- Automatically find each company's career/jobs page where open positions are posted (pages may be built using HTML, JavaScript, or modern front-end frameworks)
- Navigate through all job listings, including pagination (page numbers, next/previous, etc.), "Load more" buttons, and infinite scrolling, with the ability to fetch data from multiple pages (e.g., page 3, 4, or beyond)
- Apply job filters, especially location-based filtering, so that only job links for specific locations are collected
- Extract only individual job posting links after filters are applied
- Visit each job link and scrape complete job details, including job title, job description, location, employment type (if available), department / ...
I need every bit of information currently stored in my Tally company—masters, vouchers, inventory, bank transactions, statutory ledgers, the lot—pulled out once and delivered in a clean, tabular Excel workbook. The extraction must be fully automated (TDL, ODBC, or any method you're comfortable with) so I can rerun it later, but this engagement covers a single execution and hand-over.
Deliverables
• An Excel file where each dataset appears as a properly labeled table, with field names matching Tally, dates and numbers intact, ready for analysis or import elsewhere.
We will provide the Tally data file. Let me know which approach you prefer (TDL, ODBC, etc.) and how quickly you can turn the finished workbook around. Please also advise your working days and hours.
I need a self-contained utility that connects to the Databento API, authenticates with my key, and pulls historical Futures, Options, and 0DTE data (for different instruments), then saves each dataset to well-structured CSV files for downstream data analysis. I already have a Databento account, so the focus is purely on coding the extractor and ensuring it is robust enough to handle large pulls without timing out or breaching rate limits.
Key points to keep in mind
• The tool must support clear parameter inputs (symbol, contract month, date range, and data type) and return the corresponding dataset in CSV.
• Output files should follow a predictable naming convention and include headers exactly as provided by the API.
• Error handling, retry logic, and concise logging ...
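The retry logic this brief asks for can be library-agnostic. A stdlib-only sketch of exponential backoff with jitter, into which the actual Databento call would be passed as a callable (the parameter names are illustrative, not Databento's API):

```python
import random
import time

def with_retries(fetch, max_attempts=5, base_delay=1.0):
    """Call fetch(); on failure, sleep with exponential backoff plus jitter.

    fetch is any zero-argument callable, e.g. a closure around the real
    API request. Re-raises the last error once attempts are exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts:
                raise
            # 1s, 2s, 4s, ... plus up to 0.5s of jitter to avoid bursts
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5))
```

Wrapping each pull this way keeps rate-limit and timeout handling in one place rather than scattered through the extractor.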
I need help transferring data from a small batch of Word documents—no more than five in total—into a clean, well-structured Excel workbook. All source files will be supplied in .docx format, and I’ll indicate exactly which fields, headings, and date or number formats I want in the spreadsheet. Here’s what the job involves: copying the required information, creating and naming columns as instructed, standardising dates and numeric values, and running a quick sweep to catch duplicates or obvious typos before you hand the file back. Accuracy matters more than speed; I will cross-check the sheet against the originals, so I’m looking for someone who is comfortable double-verifying their own work. Deliverables • One Excel file (.xlsx) containing all data fr...
I need an applied-AI and automation specialist who can look at my existing forensic accounting workflow—currently driven mostly in Excel alongside Word and Google Sheets—and build a repeatable, transparent pipeline that does the heavy lifting while still letting me apply professional judgment before anything goes out the door. Here is what happens today: I send small to mid-size companies a standardized list of financial and inventory reports that I need in order to perform an independent bank audit against their loan(s). These reports include year-end docs, tax docs, inventory, insurance, banking docs, financial reconciliation reports, etc. I then upload all paperwork to BOX and reorganize everything for the next phase. Once captured, those figures get reconciled against a set of templated work...
1. Objective
Develop a mandatory weekly vehicle checklist system fully operated via WhatsApp, with automated reminders, photo validation, cloud storage, and Excel tracking. Drivers must complete the checklist every Friday. The checklist is not considered completed until:
- All questionnaire fields are answered
- All required photos are uploaded
- The system validates everything
2. Input data (Excel – provided by client)
An Excel file will be provided as the source of truth for vehicles and drivers.
Sheet: VEHICLES_MASTER (each row = one vehicle)
Mandatory columns: VEHICLE_PLATE, VEHICLE_TYPE (TRUCK or VAN), DRIVER_NAME, DRIVER_PHONE (WhatsApp number, international format), INTERNAL_ID (optional), DELEGATION (optional)
This file determines: who receives the WhatsApp messages...
FUNCTIONAL SPECIFICATION: WhatsApp-Based Machine Photos & Document Management System
Global Objective
Use WhatsApp as the single input channel to automatically manage:
- General machine photos
- Machine identification plates
- Logistics documents (Delivery Notes / CMR / Transport Docs)
Each image type must follow a separate, independent workflow, without mixing logic or data.
FLOW 1 — GENERAL MACHINE PHOTOS
Input: photos of machines sent via WhatsApp (front, side, wheels, basket, display, etc.). These images are not identification plates and not documents.
System Logic: the system must automatically detect a machine number visible in the image (painted number, sticker, marking), for example 248. This detected number is used as the machine identifier.
Cloud Storage: if the folder d...
Please Read Carefully Before Applying
It does not matter whether you consider yourself a "vibe coder" or a traditional software engineer; we accept both here. What matters is whether you can make this system work reliably at scale. We operate a production scraper that processes 500+ leaderboard sites per hour. All sites we scrape are leaderboards, but no two sites are the same. This is not a basic scraper.
What Makes This Scraper Different
The leaderboards we scrape vary heavily in structure and behavior:
- Dynamic buttons, tabs, and switchers
- JavaScript-rendered content
- Hybrid navigation (UI interaction + background API calls)
- Tables, card layouts, podium layouts, or combinations of all three
- Masked usernames and inconsistent rank formats
- Different ordering of wager / prize data ...
I need a small proof-of-concept scraper written in Python that pulls user information from a set of static website pages and exports it into a clean CSV file. The pages load without JavaScript, so a lightweight stack such as requests + BeautifulSoup (or lxml) should be all that's required; no browser automation is necessary unless you can justify a clear advantage. I will supply the page URLs and highlight the exact fields to capture (name, profile link, location, and any other visible user meta). Your code should handle pagination where applicable, respect polite crawl rates, and be easy for me to adjust if the HTML structure shifts.
Deliverables
• Well-commented Python script (.py)
• Sample CSV containing the extracted records
• README with setup steps and a qu...
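The parsing half of the requests + BeautifulSoup stack this brief suggests can be sketched without network access. The CSS class names below are assumptions; the real selectors depend on the pages the client supplies:

```python
import csv
from bs4 import BeautifulSoup

def parse_users(html):
    """Extract (name, profile_link, location) tuples from one static page."""
    soup = BeautifulSoup(html, "html.parser")
    users = []
    for card in soup.select(".user"):              # assumed container class
        name = card.select_one(".name").get_text(strip=True)
        link = card.select_one("a")["href"]
        loc = card.select_one(".location").get_text(strip=True)
        users.append((name, link, loc))
    return users

def write_csv(rows, path="users.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "profile_link", "location"])
        writer.writerows(rows)
```

Keeping the selectors at the top of one small function is what makes the script "easy to adjust if the HTML structure shifts"; fetching each page with `requests.get` and a polite delay would feed `parse_users` directly.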
PROJECT: AI-POWERED CONVERSATIONAL DOCUMENT CLOUD ACCESSIBLE VIA WHATSAPP 1. Overview The project consists of creating an intelligent document cloud, accessible primarily through WhatsApp, where users can ask anything related to the company (machines, documentation, spare parts, regulations, internal data, etc.), and the AI automatically returns the correct information, either as a text response or by delivering the exact PDF document required. This is not a traditional app and not a simple chatbot. It is the living memory of the company, organized in the cloud and accessed conversationally. 2. Entry Point: WhatsApp WhatsApp is the only access channel. The phone number identifies the user. There are no usernames or passwords. The system automatically recognizes: The phone number T...
I will supply several PDFs containing mixed text and numeric information, and I need every line transferred accurately into Excel within three days. The final workbook should be organized across multiple sheets rather than a single master tab. While the source files do not specify sheet titles, I’m open to your suggestions—please propose a clear, logical naming convention that makes navigation effortless. Accuracy is the top priority: totals must match the originals, text must be copied exactly, and no rows can be skipped. Once complete, return the finished .xlsx file plus any notes that explain your chosen sheet names or highlight ambiguous entries you want me to double-check.
I need a clean, well-structured extract of permit holder information from the WA State Labor & Industries online permit lookup (sometimes called the Permit Center). Whether you can do a fully automated scrape or need to do a manual pull is up to you—the key is accuracy and complete coverage.
Scope
• Visit the WA State L&I electrical permit lookup site and capture every record that appears in the public search results that:
- is for a generator or automatic transfer switch installation,
- is for the license numbers that will be given to you,
- falls within the timeframe given (5-6 years back).
• Extract only the permit holder–related fields (name, address, and any other holder-specific details that the site exposes).
• Return the ...
I need a reliable specialist who can log into our dealership’s backend every weekday, pull fresh customer information, and feed it straight into our call-tracking platform the same day. The only data I’m after are contact details and service records—nothing else—so the extraction script or manual process can stay laser-focused on those two fields for speed and accuracy. Turnaround is critical. If you can set this up and have the first full export/import cycle running smoothly right away, I’m happy to add a rush bonus on top of the agreed rate. Accuracy must be spot-on and the data has to land in the tracking system without duplicates or formatting hiccups. Deliverables each weekday: • Clean export of new customer contact details and service record...
I have a spreadsheet with 200 US-based websites and I need the direct phone number of each owner. The numbers are not published on the sites themselves, so please pull them through your own account. Alongside every number, include the owner's LinkedIn profile URL; no other fields are required.
What I expect from you
• A clean CSV or Google Sheet with three columns: Website, Owner Phone Number, LinkedIn Profile
• Accuracy checked against Apollo's latest data
• Completion within 24 hours of project acceptance
This is a quick job for an experienced user. I will review the sheet immediately and release payment within 24 hours once the data is verified.
I need a developer to collect data from multiple public websites and deliver it in a clean, structured format. This is for legitimate data extraction from publicly available pages. I will share the target URLs and exact data fields with shortlisted candidates.
Scope of work
- Scrape data from multiple public websites (details shared after shortlisting)
- Extract specific fields consistently and handle pagination/filtering where needed
- Normalize/clean the data (remove duplicates, consistent formatting)
- Export results to CSV/Excel/JSON (format to be confirmed)
- Provide a repeatable solution (script or small app) that I can run on demand
- Basic documentation: how to run it, how to adjust settings, where outputs go
Quality requirements
Reliable scraping with error handling and retries. Resp...
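The "normalize/clean" step above can be sketched with the standard library alone: trim whitespace, compare on a case-folded key, and drop duplicate records while preserving order. The field names are illustrative:

```python
def normalize(record):
    """Strip surrounding whitespace from every string value in a record."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def dedupe(records, key_fields):
    """Remove duplicates by the given fields, case-insensitively, keeping
    the first occurrence so the original ordering survives."""
    seen, out = set(), []
    for rec in map(normalize, records):
        key = tuple(str(rec.get(f, "")).lower() for f in key_fields)
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out
```

Running this pass just before the CSV/Excel/JSON export keeps the output consistent regardless of which site a record came from.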
I need a concise technical blueprint that shows exactly how to print in the Cityworks environment through SQL Server Management Studio. The goal is to make sure a field on the form, "DUP. BAD CHECK", is handled correctly:
- L-BADCHECK: if a Bad Check fee is attached to a case, it's owed. Status, type, etc. do not matter. Same for L-DUPLCTE.
- L-DUPBC: this is either or both a bad check fee and a duplicate fee from historic records (pre-migration). Amounts: 30 - BC, 20 - Dupe, 50 - Both (rare, if it exists).
The data originates in the Cityworks front-end application, then lands in the back-end SQL database on a batch schedule. You'll be working with:
• Cityworks PLL (latest build)
• Microsoft SQL Server 2019 + SSMS
What I expect from you:
• A clear, step-by-step document that maps Citywor...
Several hundred rows of records are currently trapped in a mix of PDF documents and image files. I need every line transferred into a clean, well-structured Excel workbook within the next 2–3 days. Here is what the job entails:
• Transcribe all data from the supplied PDFs and images into a single Excel file, keeping every field in its correct column.
• Remove any duplicate rows, fix alignment or date/number inconsistencies, and delete entries that are clearly incomplete.
• Apply quick, readable formatting—bold headers for every column and light color-coding to highlight key sections or totals—so the sheet is immediately understandable.
• Insert basic formulas where relevant: SUM ranges for totals, IF statements for simple checks, and VLOOKUPs t...
I have a set of websites whose data I need to capture automatically, and I want the whole process built as a reusable Apify actor. I will share the exact URLs, the fields to be collected, and the desired output format once we agree to proceed, but the common theme is structured extraction (think product specs, profile info, or similar). Here's the outcome I'm expecting:
• A clean Node.js actor that runs on the Apify platform, uses the latest Apify SDK, and follows best practices for request queuing, proxy rotation, and error handling.
• Configurable input schema so I can plug in new target URLs or tweak search parameters without touching the code.
• Output saved to an Apify dataset (JSON/CSV) and pushed to my Google Drive via webhook on each successful r...
I need a clean pull of every location listed on For each branch please capture: country, state, complete address, service type, phone number, and email address. The final deliverable is a single Microsoft Excel workbook containing one sheet only. All columns should be clearly labelled and the range converted to an official Excel Table so I can apply native filters instantly. No additional filtering is required on your side; just be sure the table structure supports easy filtering by any column once I open the file. Accuracy matters more than speed—every location on the site has to be included and the contact details must match what is shown online. When you hand over the file I will spot-check a sample of entries against the live site to confirm completeness and correctness bef...
I need Octoparse templates built for roughly fifty manufacturer sites in the flooring & renovation niche. Each template must crawl the full product catalog and push clean, structured data into my Supabase database. The extraction scope includes: high-quality images, complete text descriptions and feature lists, links to warranty documents or other disclosures, detailed dimensions and specifications, style and color information, collection / color-family, and every SKU shown on the page. Price data is nice-to-have when present, but its absence should not break the run. Many product pages list matching accessories (trim, transitions, quarter-round, etc.). Your logic must identify those by shared style and color so they enter the database as related items. Typical sites you will start ...
I’m ready to hand over three full years of PDF statements for five separate bank and credit-card accounts and need every single transaction moved into Excel. You’ll be working with 2022, 2023 and 2024 files; each account should end up with its own workbook tab for each year so I can switch between them quickly. I’ll supply a sample sheet that shows the custom headers and layout I want—date, description, amount, balance, plus a few extra columns for categories and notes. Please keep that structure identical across all tabs. During extraction you’ll also need to clean the data: normalize dates, strip out blank rows, fix any OCR quirks, and make sure credits and debits sit in the correct signed columns. In short, I want a spreadsheet that’s ready for pivot...
Fraud Detection Platform - Extraction Accuracy & Expansion
Project Title: Senior Python Developer Needed for Document Fraud Detection Platform (Ongoing)
Project Description
I have an 80% complete document fraud detection platform (Fraud X) built with:
- Backend: Python, FastAPI, PostgreSQL, asyncpg
- Frontend: React
- Infrastructure: DigitalOcean VPS, Nginx, Gunicorn/Uvicorn, HTTPS
- OCR: Multi-provider (Google Document AI, AWS Textract, GPT Vision fallback)
Current Status
The core system is working:
- File upload & scan lifecycle
- Multi-provider OCR with scoring
- Fraud engine with PASS/CAUTION/FAIL verdicts
- Admin dashboard with evidence viewer
- JWT authentication & role-based access
What Needs to Be Fixed (Phase 1 - Immediate)
1. Paystub E...
I need an Apify actor that crawls a single website and delivers two things for every page. You can use Puppeteer, Playwright or any other Apify helper library that keeps the run stable and fast. Here's how I see the workflow:
• I'll share the target domain, URL pattern, and the exact text blocks I care about.
• You create or fork an Apify actor in JavaScript/TypeScript, configure the request queues, handle pagination where needed, and store results in a dataset.
• The final dataset should export cleanly to JSON and CSV, and the image URLs should be downloadable in bulk (a simple link list or an Apify key-value export is fine).
• When the crawl completes, I want a brief README so I can rerun it myself later without touching the code.
Acceptance crit...
I'm looking for a dependable script or lightweight application that can collect sports betting odds from a web-based platform I have access to and export them into a structured Excel (XLSX) file. The initial focus will be on outright winner markets for golf, cycling, and baseball. The Excel output should remain clean and well-organized, grouping rows by sport, league, and event, so the data can be easily filtered and analyzed later.
Update Frequency
- Data refresh every 5 minutes
- Real-time or in-play updates are not required
- Accuracy and stability are more important than speed
Technical Expectations
- Ability to handle dynamic web content
- Robust approach that runs consistently over time
- Technology stack is flexible (Python, browser automation, or other suitable solutions)
- Clear...
I need two years' worth of bank statements converted from PDF to Excel for financial reporting purposes. The data must be organized by account and date. Additionally, please highlight any transactions related to Jeffrey Postell or Dristar Cleaning Service.
Ideal skills and experience:
- Proficiency in Excel and PDF conversion tools
- Attention to detail and accuracy
- Experience with financial data and statements
I need help moving structured information that currently sits inside a batch of Word files into my SQL database. Every document follows the same template—think headings like Name, Address, Reference ID, Dates, and a few numeric fields—so once you see one you'll immediately understand the layout of the rest. The job is pure data extraction from documents. You'll read each Word file, pull the fields exactly as they appear, and insert them into the SQL tables I'll provide.
I will supply:
• a sample of the Word template
• the SQL schema with column descriptions
• a small set of completed rows so you can confirm formatting
Accuracy is key; any typos or misplaced values will create reporting issues downstream. Please make sure each entry match...
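A stdlib-only sketch of the document-to-SQL step. It assumes each document's text has already been extracted (for example with python-docx) as "Field: value" lines matching the template; the table and column names below are placeholders, not the client's real schema:

```python
import sqlite3

FIELDS = ("Name", "Address", "Reference ID")   # assumed template headings

def parse_fields(text):
    """Pull the templated Field: value pairs out of one document's text."""
    values = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            if key.strip() in FIELDS:
                values[key.strip()] = val.strip()
    return values

def insert_record(conn, rec):
    # Parameterised insert keeps values exactly as extracted, no quoting bugs
    conn.execute(
        "INSERT INTO records (name, address, reference_id) VALUES (?, ?, ?)",
        (rec.get("Name"), rec.get("Address"), rec.get("Reference ID")),
    )
```

The same two-function shape (parse, then parameterised insert) carries over unchanged to the client's real schema once the column descriptions arrive.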
Need previously scraped truepeoplesearch data. Only bid if you already have the dataset.
I already have a scenario set up and ready to run—what I’m missing is an active, fully-functional LinkedIn Sales Navigator seat that I can connect my Vayne module to. If you currently hold such an account and can grant API or session-cookie access (whichever method you normally use for integrations), let’s work together. Once connected, I’ll handle the filtering logic inside , but I need your account to serve as the data source and, ideally, your guidance to be sure the pull limits stay within LinkedIn’s acceptable use. When you reply, focus on your experience—how you’ve successfully linked Sales Navigator with automation platforms before, any anti-scrape precautions you follow, and typical daily search volumes you’ve handled without issu...
I have a single-page TIFF that contains image data, and I need a small, well-structured Python project that will read that file, compute its colour histogram (or any other standard colour-profile information we agree on), and write the numeric results to a clean CSV file. I care as much about the code organisation as the actual extraction: classes, clear method separation, doc-strings and a simple command-line entry point are expected so I can drop the module straight into a larger .gal-based workflow. Feel free to rely on Pillow, OpenCV, NumPy or similar mainstream libraries as long as dependencies are listed in a requirements.txt.
Deliverable
• A self-contained Python package (Git repo or zip) with class-based implementation
• One example script showing how to point it at a...
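A class-based sketch matching the structure requested above, using Pillow (one of the libraries the brief allows). The CSV layout, channel/bin/count, is an assumption to be agreed on:

```python
import csv
from PIL import Image

class HistogramExtractor:
    """Read an image file, compute its colour histogram, write counts to CSV."""

    def __init__(self, path):
        self.path = path

    def histogram(self):
        """Return the flat RGB histogram: 256 bins per channel, 768 values."""
        with Image.open(self.path) as img:
            return img.convert("RGB").histogram()

    def to_csv(self, out_path):
        hist = self.histogram()
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["channel", "bin", "count"])
            for i, count in enumerate(hist):
                writer.writerow([("R", "G", "B")[i // 256], i % 256, count])

if __name__ == "__main__":
    import sys
    HistogramExtractor(sys.argv[1]).to_csv(sys.argv[2])
```

The `__main__` guard provides the command-line entry point, e.g. `python histogram.py input.tif profile.csv`.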
I track a long list of OTC tickers and need a hands-off way to grab every historical and new financial report that appears on the "Filings & Disclosure" section of otcmarkets.com. At the moment I only care about the PDFs of annual, quarterly and interim filings, but the solution should be flexible enough that I can later extend it to press releases or historical data if required. Here's what I expect:
• A script (preferably in Python 3 using requests / BeautifulSoup, or Selenium if necessary) that accepts a plain-text list of symbols, checks each page once per day and downloads any financial report that is not already saved.
• Folder or filename logic that organises the PDFs by ticker and date so nothing is overwritten.
• A simple log or CSV that r...
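The folder/naming logic and the "skip what's already saved" check above can be sketched with the standard library; the actual fetch is left as a callable stub since the real otcmarkets.com endpoints aren't specified here, and the naming scheme is an illustrative assumption:

```python
from pathlib import Path

def target_path(base_dir, ticker, filing_date, title):
    """<base>/<TICKER>/<YYYY-MM-DD>_<title>.pdf — dated names avoid overwrites."""
    safe = "".join(c if c.isalnum() or c in "-_" else "_" for c in title)
    return Path(base_dir) / ticker.upper() / f"{filing_date}_{safe}.pdf"

def download_if_new(base_dir, ticker, filing_date, title, fetch_pdf):
    """Save the filing only when it isn't already on disk.

    fetch_pdf is a zero-argument callable returning the PDF bytes, so the
    network layer (requests or Selenium) stays swappable.
    Returns True when a download happened, False when the file existed.
    """
    path = target_path(base_dir, ticker, filing_date, title)
    if path.exists():
        return False
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(fetch_pdf())
    return True
```

Run daily, the `exists()` check is what makes the script idempotent: reruns only pull filings that appeared since the last pass.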
I need the full set of customer-related records lifted from my SQL database and dropped into a single worksheet that follows a template I'll send you. The database holds tables for customers, payments, customer relations, sales and units. Your job is to write the necessary SQL (views, joins or stored procedures—whatever is most efficient), run it, and populate the template with:
• each customer's name and contact details
• their complete purchase history and account status
• the matching payment, sales and unit information
Everything must end up in one sheet exactly where the template expects it, so column order and data types have to match. Once the file is filled, deliver it back together with the extraction script or query so I can repeat the proce...
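A runnable miniature of the join this brief describes, shown against sqlite3 so the row shape is easy to verify; the table and column names are guesses at the client's schema, not the real one:

```python
import sqlite3

# LEFT JOINs keep customers that have no payment or sale yet,
# so no record is silently dropped from the worksheet.
QUERY = """
SELECT c.name, c.contact, p.amount, s.unit_id
FROM customers c
LEFT JOIN payments p ON p.customer_id = c.id
LEFT JOIN sales s    ON s.customer_id = c.id
ORDER BY c.name
"""

def customer_rows(conn):
    """Return the flattened customer/payment/sales rows for the template."""
    return conn.execute(QUERY).fetchall()
```

The same query shape transfers to the production database; only the joins and column list change once the real schema is known, and the fetched rows map one-to-one onto the template's columns.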
I need a forensic specialist, ideally based in Ho Chi Minh City, to pull every scrap of data still living inside Zalo's local SQLite stores on a Windows laptop. The focus is narrow and clear: recover all text messages plus any contact information linked to those chats; photos, videos, and other media can be ignored for now. You're free to work with Magnet AXIOM, Cellebrite, Autopsy, X-Ways, or your own Python / sqlite3 scripts—whatever lets you prove the chain of custody and export clean, searchable tables.
Deliverables
• CSV or XLSX of every recovered message with timestamp, sender/receiver IDs, and thread reference
• CSV or XLSX of the full contact list (UID, display name, phone/email where available)
• A brief but reproducible methodology report d...
I need every bicycle, accessory, and clothing item currently sold on Bike24 imported into my own e-commerce site. Each listing on my end must carry over the full product description, the current price, and all available images exactly as they appear on Bike24. My store is already live; what I am missing is a reliable, repeatable way to pull in this catalogue and keep it in sync. A custom scraper, API connector, or an import script that feeds directly into my CMS are all acceptable—as long as it works smoothly and can be rerun whenever Bike24 updates their stock. To be crystal clear, the finished job is considered complete when:
• All bikes, accessories, and clothing from Bike24 are visible on my site with correct titles, descriptions, prices, and image galleries.
• Products ...
I need help moving pure numbers from my SQL databases into a clean, ready-to-use file. I will provide read-only credentials plus a mapping sheet that tells you exactly which tables and columns feed each field in the template. Your task is to run simple SELECT queries, copy the resulting figures, and paste them into the template without introducing even a single typo. The dataset is sizeable (several thousand rows), but it is straightforward—no calculations or analysis required, just precise data entry. A basic comfort level with SQL will make the process faster, yet nothing beyond standard SELECT statements is expected. Deliverables • Completed template file (.xlsx) filled with the requested numeric values • A brief note listing any missing or suspicious entries y...
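The "brief note listing any missing or suspicious entries" deliverable can be produced in the same pass as the SELECTs. A sketch, using an in-memory database and a placeholder table name ("metrics") in place of the client's mapping sheet:

```python
import sqlite3

# Placeholder schema; the real table/column names would come from the
# client's mapping sheet.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE metrics (row_id INTEGER, value REAL);
INSERT INTO metrics VALUES (1, 42.0), (2, NULL), (3, -7.5);
""")

values, issues = [], []
for row_id, value in con.execute("SELECT row_id, value FROM metrics ORDER BY row_id"):
    if value is None:
        issues.append(f"row {row_id}: missing value")
    elif value < 0:
        issues.append(f"row {row_id}: negative value {value}")
    values.append(value)

print(issues)  # feeds the 'missing or suspicious entries' note
```

Copying programmatically like this, rather than retyping, is also how you meet the "not even a single typo" bar on several thousand rows.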
I have a collection of scanned documents in PDF format that contain a mix of text and numerical information. I need every field transcribed accurately into an editable digital file—spreadsheets such as Excel or Google Sheets are ideal, but I’m open to any structured format you prefer as long as the data can be sorted and filtered later. Key points • Source: scanned PDFs only—no other file types involved. • Content: both textual descriptions and numbers appear throughout each page, so the work calls for keen attention to detail. • Accuracy: please proof-check your entries; I will spot-sample a portion of the output before sign-off. Feel free to use your preferred OCR tools as a starting point, yet final results must be clean and error-free. Let me ...
I have a protocol ready for a medical meta-analysis that centres on knee replacement. The aim is to move from a well-defined research question to a manuscript that can be submitted to a peer-reviewed journal. Scope of work The review will synthesise existing literature on knee replacement, with the current plan calling for the inclusion of studies categorised under “Meta-analysis.” I will share my preliminary search strings; you can refine them and run comprehensive searches in PubMed, Embase, Cochrane Library, Web of Science and any other databases you judge relevant. All steps must align with PRISMA and the Cochrane Handbook. Statistical tasks Use RevMan, Stata or R (meta/metafor) to: • extract study-level data, • calculate pooled effect sizes (risk ratio,...
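The pooled-effect-size step the brief assigns to RevMan, Stata, or R's meta/metafor packages is, at its core, inverse-variance weighting. A small illustration of the fixed-effect version for a log risk ratio, with two made-up studies (the numbers are not from any real knee-replacement literature):

```python
from math import log, exp, sqrt

# Made-up study-level data: (events_treat, n_treat, events_ctrl, n_ctrl).
studies = [
    (12, 100, 20, 100),
    (8,  80,  15, 80),
]

num = den = 0.0
for a, n1, c, n2 in studies:
    lrr = log((a / n1) / (c / n2))      # log risk ratio
    var = 1/a - 1/n1 + 1/c - 1/n2      # large-sample variance of log RR
    w = 1 / var                         # inverse-variance weight
    num += w * lrr
    den += w

pooled_rr = exp(num / den)
ci = (exp(num/den - 1.96/sqrt(den)), exp(num/den + 1.96/sqrt(den)))
print(round(pooled_rr, 3), tuple(round(x, 3) for x in ci))
```

Real syntheses would of course add random-effects models, heterogeneity statistics (I², tau²), and forest plots, which is exactly what the named tools provide out of the box.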
This job is being reposted because the previous worker collected the wrong table. We need someone to copy down 63 company tables from a website:
1- go to
2- Select the year 2021 (only that year). Alphabetically, the first company should be Aditya Birla Sun Life AMC Ltd.
3- There will be a list of company names, such as Antony Waste Handling Cell Ltd. Click on it.
4- Find the basis of allotment table. The table has a fairly stable structure. Usually, the first column is called Category, and the second column is Applications (the number of applications). An example table header is:
Category | No. of Applications | No. of Equity Shares Applied | Shares Reserved as per Prospectus | No. of times Subscribed | Amount (Rs)
Qual...
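Because the brief says the table structure is stable, the copy-down step can be scripted rather than done by hand. A sketch of parsing the two leading columns (Category, No. of Applications) from extracted rows; the sample rows below are invented:

```python
# Invented sample rows in tab-separated form, standing in for the cells
# extracted from one company's basis-of-allotment table.
raw = """\
QIB\t1234\t5000000\t4000000\t1.25\t500000000
NII\t5678\t2000000\t1500000\t1.33\t200000000
"""

rows = []
for line in raw.splitlines():
    cols = line.split("\t")
    # Only the first two columns (Category, No. of Applications) are stable
    # across companies, so that is what we key on.
    rows.append({"category": cols[0], "applications": int(cols[1])})

print(rows)
```

Keying on the stable leading columns is also the safeguard against collecting the wrong table, which is what sank the previous attempt.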
I have a collection of PDFs that contain a blend of text paragraphs, tables, and standalone figures. I need every piece of that information captured with zero loss of detail and transferred into an editable, structured format. You will receive the files in batches, and I expect each batch returned fully typed, logically organized, and double-checked for accuracy before submission. Because the source is digital, you are free to use OCR, data-capture software, or manual typing—whatever gives the cleanest result—but please proofread so the final output mirrors the PDFs exactly, including punctuation and number formatting. Deliverables • An editable file (Excel, Google Sheet, or Word—choose whichever best preserves layout) for each PDF supplied • A brief compl...
I need every record under the “CP” and “CPL” file-type options on pulled into a clean XLSX. Here’s the flow I use manually: • choose CP, then CPL from the File Type dropdown (that is the only filter I care about right now) • cycle through each File Number that appears • open the property page that pops up in a new tab or frame • copy the table cells for Owner Name, Father/Husband Name, Correspondence Address and Share. Doing this by hand is no longer practical, so I want you to recreate that exact sequence in code or with a scraping tool that can handle the site’s dynamic dropdowns and postbacks. This is intended as a one-time data pull, but I’m not opposed to receiving the script as well if it makes future updates ea...
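The per-record loop described above can be separated from the messy part (the site's dynamic dropdowns and postbacks, which in practice need `requests.Session` or a browser driver such as Selenium or Playwright). In the sketch below the page-fetching step is abstracted as a callable, so the loop itself is testable; the stub fetcher is purely illustrative.

```python
FIELDS = ["Owner Name", "Father/Husband Name", "Correspondence Address", "Share"]

def extract_records(file_numbers, fetch_table):
    """fetch_table(file_number) -> dict of the property page's table cells.

    In production, fetch_table would drive the site's dropdown/postback
    sequence; here it is injected so the loop can run standalone."""
    records = []
    for fn in file_numbers:
        table = fetch_table(fn)
        # Missing cells become empty strings so the XLSX columns stay aligned.
        records.append({"File Number": fn, **{f: table.get(f, "") for f in FIELDS}})
    return records

# Stub standing in for the real postback round-trip.
def fake_fetch(fn):
    return {"Owner Name": f"Owner {fn}", "Share": "1/2"}

print(extract_records(["CP-001"], fake_fetch))
```

Structuring it this way also makes it cheap to hand over the script for future updates, as the brief suggests.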
Learn how to hire and collaborate with a freelance Typeform Specialist to create impactful forms for your business.
A complete guide to finding, hiring, and working with a skilled freelance typist for your typing projects.