PromptBase

Prompt Details

Model
Chat (GPT-5.5)
Token size
1,350
Example input
[PASTE RESUME HERE]:

ALEX MORGAN
Junior Python Developer | Web Scraping | Data Automation
Email: alex.morgan.test@email.com
Phone: +1 555 014 7821
LinkedIn: linkedin.com/in/alex-morgan-test
GitHub: github.com/alexmorgan-dev
Location: Remote | Open to Hybrid

SUMMARY
Junior Python Developer with experience in web scraping, data extraction, API integrations, and automation workflows. Skilled in building Python scripts for collecting, cleaning, validating, and exporting structured data. Comfortable working with dynamic websites, REST APIs, CSV/JSON datasets, and basic SQL databases. Strong interest in data engineering, automation, and AI-assisted development.

CORE SKILLS
Python • Web Scraping • Selenium • Playwright • BeautifulSoup
REST APIs • JSON • CSV • Pandas • SQLite • PostgreSQL
Data Cleaning • Data Validation • Automation Scripts
Git • Linux • Docker Basics • Google Sheets API
AI Tools • Prompt Engineering • Debugging with LLMs

PROFESSIONAL EXPERIENCE
Junior Python Developer
IBM — Remote | Sep 2023 – Present
- Built Python scripts to extract, clean, and organize structured data from public websites and internal sources.
- Used Selenium and BeautifulSoup to collect product, pricing, and market data from multi-page websites.
- Created CSV and JSON export pipelines for analysts and business users.
- Integrated several REST APIs to enrich datasets with external metadata.
- Improved data validation logic to reduce duplicate and incomplete records.
- Used Git for version control and worked with basic Linux command-line tools.

Python Automation Intern
Accenture — New York, NY | Jun 2022 – Aug 2023
- Assisted in building automation scripts for repetitive data-entry and reporting tasks.
- Cleaned and transformed Excel and CSV files using Pandas.
- Created simple internal tools to merge datasets and generate weekly reports.
- Documented scripts and workflows for non-technical team members.

PROJECTS
Marketplace Price Tracker
- Built a Python scraper that collected product names, prices, ratings, and availability from e-commerce pages.
- Used Selenium for dynamic pages and Pandas for cleaning and exporting results.
- Added basic retry logic and duplicate detection.

API-Based Job Listing Collector
- Created a script that collected job listings from public APIs and stored them in SQLite.
- Added filters by location, seniority level, technology, and remote status.

EDUCATION
University of California, Berkeley
Bachelor of Arts in Computer Science
2018 – 2022

LANGUAGES
English — Native
Spanish — Intermediate

[PASTE PREFERENCES HERE]: -
[PASTE EXTRA INSTRUCTIONS HERE]: -
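The Marketplace Price Tracker project in the example resume mentions "retry logic and duplicate detection" without showing any code. A minimal sketch of that pattern might look like the following; the function names, field names, and sample records are hypothetical and only illustrate the technique, not the candidate's actual script.

```python
import time

def fetch_with_retry(fetch, url, retries=3, delay=0.1):
    """Call fetch(url), retrying on failure with a short pause between attempts."""
    for attempt in range(retries):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            time.sleep(delay)

def deduplicate(rows, key="name"):
    """Drop records whose key field was already seen; the first occurrence wins."""
    seen, unique = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            unique.append(row)
    return unique

# Sample scraped records (hypothetical) — one listing appears twice.
rows = [
    {"name": "Widget A", "price": 19.99},
    {"name": "Widget B", "price": 5.49},
    {"name": "Widget A", "price": 19.99},
]
print(deduplicate(rows))
```

In a real scraper the `fetch` callable would wrap a Selenium page load or an HTTP request, and the deduplicated rows would be handed to Pandas for cleaning and export, as the resume describes.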
Example output
I’ll treat the placeholder preferences as “no additional preferences provided” and use the resume only. I’ll search for current roles around junior Python, web scraping, data automation, API/data workflows, and remote/hybrid eligibility.

I found a few strong active matches and also several apparent non-matches: some scraping jobs are inactive, and some “Python automation” roles require senior/AWS/architect-level experience. I’ll prioritize realistic junior/entry roles and mark stretch roles separately rather than tailoring the resume toward jobs that are clearly too senior.

Stage 1 — Resume analysis
Main professional specialization: Junior Python Developer focused on web scraping, data extraction, API integrations, data cleaning, and automation workflows.
Seniority level: Junior. The resume shows roughly 2.5+ years in junior/intern Python automation roles, but not enough evidence for mid-level ownership, architecture, cloud deployment, or production data engineering.
Strongest skills: Python, Selenium, Playwright, BeautifulSoup, REST APIs, CSV/JSON, Pandas, SQLite/PostgreSQL basics, data cleaning, data validation, Git, Linux basics, Google Sheets API, AI-assisted debugging.
Strongest experience areas: Web scraping and browser automation. Structured data export for analysts/business users. REST API integration and dataset enrichment. Data cleaning, validation, duplicate detection, and reporting automation.
Transferable skills: Data QA, workflow automation, analyst support, technical documentation, working with messy web data, building small internal tools.
Weaknesses / missing qualifications: No Scrapy, Airflow, dbt, cloud deployment, AWS/GCP/Azure, Docker beyond basics, CI/CD, async scraping, proxies, CAPTCHA/MFA handling, production monitoring, formal data engineering stack, or advanced statistics/ML.
Best-fitting job titles: Junior Python Developer, Junior Web Scraping Developer, Python Automation Developer, Data Scraping Engineer, Junior Data Engineer, Data Automation Engineer, Entry-Level Data Scientist, Data Quality Engineer, Data Collection Engineer.

Stage 2 — Preferences used
No real preferences were provided because the placeholders were not filled in. I used only: Remote | Open to Hybrid, English native, Spanish intermediate, and the resume’s junior Python/data automation profile.

Stage 3 — Relevant job openings found

1. Junior Vulnerability Automation Engineer — Flashpoint
   Location / format: Remote, USA only
   Type / pay: Full-time; salary not listed
   Fit: 8.5/10
   Why it fits: Entry-level, Python, Git, PostgreSQL, web scraping interest, scalable data workflows, messy HTML/data parsing.
   Weak or missing: Vulnerability/security domain, Airflow/NiFi, distributed systems.

2. Data Scientist — FINTRX
   Location / format: Boston, MA; remote option, EST hours required
   Type / pay: Full-time; competitive salary listed, no exact number; no visa sponsorship
   Fit: 8/10
   Why it fits: Entry-level, Python, SQL, web scraping, data aggregation, AI/process improvement.
   Weak or missing: Statistics/modeling, Snowflake/GCP, stronger data science framing.

3. Freelance Data Scraping Engineer — Mindrift
   Location / format: Fully remote, own schedule
   Type / pay: Freelance/part-time; up to $32/hour equivalent
   Fit: 7.5/10
   Why it fits: Very close technical match: Python scraping, Selenium/BeautifulSoup, APIs, data cleaning, validation, dynamic sites, LLM workflows.
   Weak or missing: Requires 3+ years relevant experience; proxies/Apify/OpenRouter not in resume.

4. Remote Junior Data Engineer — Python Web Scraping & QA — Madfish
   Location / format: Listed as remote, but the posting also says “No home office possible,” so the status is unclear.
   Type / pay: Full-time; compensation not listed
   Fit: 7/10
   Why it fits: Junior data engineering, Python scrapers, extraction, normalization, validation pipelines.
   Weak or missing: Listing quality/format ambiguity; limited details.

5. Python Scraping Developer — OnHires
   Location / format: Remote, Peru restriction shown
   Type / pay: Full-time; salary not listed
   Fit: 6.5/10
   Why it fits: Strong match on Python scraping, Playwright/Selenium/BeautifulSoup, REST APIs, Docker/Linux/Git.
   Weak or missing: More experienced role: high-volume scraping, async, proxy rotation, monitoring, NoSQL.

6. Junior Data Engineer — Databricks — Addepto
   Location / format: Warsaw / Cracow / Wroclaw / Bialystok / Remote
   Type / pay: B2B, 8,400–15,120 PLN/month
   Fit: 6/10
   Why it fits: Junior level, Python, data processing, pipelines, remote option.
   Weak or missing: Databricks, Spark, Airflow/Dagster, dbt, Azure, CI/CD, Terraform, data modeling.

7. Automation Developer / Advocate — Scrapfly
   Location / format: Remote-first, EU-aligned timezone UTC±3
   Type / pay: Contractor model, paid in USD
   Fit: 5.5/10
   Why it fits: Web scraping company; Python used in scraping engine/test automation; automation integrations are relevant.
   Weak or missing: Requires 2+ years software development plus JavaScript, SDKs, docs/tutorials, developer advocacy.

Skip: Python Automation Engineer — iCapital
   Location / format: Salt Lake City, office/hybrid
   Type / pay: Senior/VP-level
   Fit: 2/10
   Why it fits: Uses Python, Playwright, scraping, APIs.
   Weak or missing: Requires 10+ years for a VP role, AWS Lambda/Step Functions/S3/CloudWatch/Secrets Manager, MFA/CAPTCHA/auth architecture.

Best shortlist: Flashpoint, FINTRX, Mindrift, Madfish, OnHires.
Best jobs to actually tailor for: Flashpoint, FINTRX, Mindrift. These are the strongest combination of realism, skill match, and resume-supported keywords.

Tailored Resume 1 — Flashpoint

ALEX MORGAN
Junior Python Developer | Data Automation | Web Scraping
Email: alex.morgan.test@email.com
Phone: +1 555 014 7821
LinkedIn: linkedin.com/in/alex-morgan-test
GitHub: github.com/alexmorgan-dev
Location: Remote | Open to Hybrid

Summary
Junior Python Developer with experience building automation scripts, web scraping workflows, and structured data pipelines. Skilled in extracting, parsing, cleaning, validating, and exporting data from public websites, internal sources, REST APIs, and multi-page web structures.
Comfortable working with Python, Selenium, BeautifulSoup, Playwright, SQL databases, CSV/JSON datasets, Git, and Linux command-line tools. Interested in scalable data workflows, vulnerability intelligence data collection, and transforming messy web data into reliable structured outputs.

Core Skills
Python • Web Scraping • Data Automation • Selenium • Playwright • BeautifulSoup
HTML Parsing • REST APIs • JSON • CSV • Pandas • SQLite • PostgreSQL
Data Cleaning • Data Validation • Duplicate Detection • Structured Data Export
Git • Linux • Docker Basics • Google Sheets API
AI Tools • LLM-Assisted Debugging • Technical Documentation

Professional Experience
Junior Python Developer
IBM — Remote | Sep 2023 – Present
- Built Python scripts to extract, clean, validate, and organize structured data from public websites and internal sources.
- Used Selenium and BeautifulSoup to collect product, pricing, and market data from multi-page websites with changing page structures.
- Created CSV and JSON export pipelines for analysts and business users.
- Integrated REST APIs to enrich datasets with external metadata.
- Improved validation logic to reduce duplicate and incomplete records.
- Used Git for version control and Linux command-line tools for development workflows.

Python Automation Intern
Accenture — New York, NY | Jun 2022 – Aug 2023
- Assisted in building Python automation scripts for repetitive data-entry and reporting workflows.
- Cleaned and transformed Excel and CSV files using Pandas.
- Created internal tools to merge datasets and generate weekly reports.
- Documented scripts and workflows for non-technical team members.

Projects
Marketplace Price Tracker
- Built a Python scraper that collected product names, prices, ratings, and availability from e-commerce pages.
- Used Selenium for dynamic pages and Pandas for cleaning and exporting results.
- Added retry logic and duplicate detection to improve data reliability.

API-Based Job Listing Collector
- Created a script that collected job listings from public APIs and stored them in SQLite.
- Added filters by location, seniority level, technology, and remote status.

Education
University of California, Berkeley
Bachelor of Arts in Computer Science
2018 – 2022

Languages
English — Native
Spanish — Intermediate

Report:
Target: Junior Vulnerability Automation Engineer — Flashpoint.
Fit score: 8.5/10.
Changed: Reframed the profile around messy HTML, parsing, data workflows, validation, PostgreSQL, Git, and structured intelligence-style data.
Added supported keywords: web scraping, HTML parsing, PostgreSQL, scalable data workflows, data validation, structured data export, REST APIs.
Still weak: vulnerability intelligence, Airflow/NiFi, security advisories, distributed data systems.
Worth applying: Yes. This is the strongest match.

Application message:
I’m applying for the Junior Vulnerability Automation Engineer role because my background is closely aligned with Python-based data collection, web scraping, parsing, and validation workflows. In my current role, I build scripts that extract, clean, validate, and export structured data from websites, internal sources, and REST APIs. I have hands-on experience with Selenium, BeautifulSoup, Playwright, PostgreSQL/SQLite basics, Git, and Linux, and I’m interested in applying these skills to vulnerability intelligence data workflows. I would bring strong attention to data quality, practical automation experience, and a willingness to grow into the security domain.

Tailored Resume 2 — FINTRX

ALEX MORGAN
Entry-Level Data Scientist | Python | SQL | Web Scraping
Email: alex.morgan.test@email.com
Phone: +1 555 014 7821
LinkedIn: linkedin.com/in/alex-morgan-test
GitHub: github.com/alexmorgan-dev
Location: Remote | Open to Hybrid

Summary
Entry-level Python data professional with experience in web scraping, data aggregation, data cleaning, API integrations, and structured dataset preparation.
Skilled in using Python, SQL, Pandas, REST APIs, CSV/JSON workflows, and validation logic to collect, organize, and improve data for analysts and business users. Strong interest in data science, data quality, process improvement, and AI-assisted development.

Core Skills
Python • SQL • Pandas • Data Cleaning • Data Validation
Web Scraping • Selenium • Playwright • BeautifulSoup
REST APIs • JSON • CSV • SQLite • PostgreSQL
Data Aggregation • Reporting Automation • Dataset Enrichment
Git • Linux • Google Sheets API • AI Tools • LLM Workflows

Professional Experience
Junior Python Developer
IBM — Remote | Sep 2023 – Present
- Built Python workflows to collect, clean, validate, and organize structured data for analysts and business users.
- Used Selenium and BeautifulSoup to gather product, pricing, and market data from multi-page websites.
- Created CSV and JSON export pipelines to support business reporting and downstream analysis.
- Integrated REST APIs to enrich internal datasets with external metadata.
- Improved validation logic to reduce duplicate and incomplete records.
- Used Git and Linux command-line tools in daily development workflows.

Python Automation Intern
Accenture — New York, NY | Jun 2022 – Aug 2023
- Assisted in automating repetitive data-entry and reporting tasks with Python.
- Cleaned and transformed Excel and CSV files using Pandas.
- Built simple internal tools to merge datasets and generate weekly reports.
- Documented data workflows for non-technical users.

Projects
Marketplace Price Tracker
- Built a scraper to collect product names, prices, ratings, and availability from e-commerce pages.
- Used Selenium for dynamic web pages and Pandas for cleaning and exporting data.
- Added retry logic and duplicate detection.

API-Based Job Listing Collector
- Created a Python script that collected job listings from public APIs and stored them in SQLite.
- Added filters by location, seniority level, technology, and remote status.

Education
University of California, Berkeley
Bachelor of Arts in Computer Science
2018 – 2022

Languages
English — Native
Spanish — Intermediate

Report:
Target: Data Scientist — FINTRX.
Fit score: 8/10.
Changed: Reframed the resume from “developer” toward “entry-level data professional,” highlighting Python, SQL, data aggregation, AI tools, reporting, and dataset quality.
Added supported keywords: data aggregation, Python, SQL, web scraping, data cleaning, process improvement, AI-assisted development.
Still weak: statistical modeling, regression models, Snowflake, GCP, SFTP, formal data science projects.
Worth applying: Yes, especially if the candidate is open to EST hours and does not need visa sponsorship.

Application message:
I’m interested in the Data Scientist role at FINTRX because it combines Python, SQL, web scraping, data aggregation, and process improvement, which match my current experience. I have built Python workflows for collecting, cleaning, validating, and exporting structured datasets from websites, APIs, and internal sources. I’m comfortable supporting analysts and business users with reliable data outputs, and I’m also interested in using AI tools to improve data workflows. I would bring practical data automation experience and a strong focus on data accuracy.

Tailored Resume 3 — Mindrift

ALEX MORGAN
Python Data Scraping Engineer | Web Scraping | Data Validation
Email: alex.morgan.test@email.com
Phone: +1 555 014 7821
LinkedIn: linkedin.com/in/alex-morgan-test
GitHub: github.com/alexmorgan-dev
Location: Remote | Open to Hybrid

Summary
Junior Python Developer with hands-on experience in web scraping, data extraction, API-based data collection, and structured dataset delivery. Skilled in building Python scripts that collect, clean, validate, deduplicate, and export data from dynamic websites, REST APIs, CSV/JSON sources, and basic SQL databases.
Experienced with Selenium, Playwright, BeautifulSoup, Pandas, SQLite, PostgreSQL, Google Sheets API, and AI-assisted debugging workflows. Strong fit for remote freelance data scraping projects requiring accuracy, independent troubleshooting, and reliable structured outputs.

Core Skills
Python • Web Scraping • Data Extraction • Data Processing
Selenium • Playwright • BeautifulSoup • REST APIs
Dynamic Websites • Multi-Page Scraping • Retry Logic
Pandas • CSV • JSON • SQLite • PostgreSQL • Google Sheets API
Data Cleaning • Data Validation • Duplicate Detection • Structured Dataset Delivery
AI Tools • LLM Workflows • Prompt Engineering • Git • Linux • Docker Basics

Professional Experience
Junior Python Developer
IBM — Remote | Sep 2023 – Present
- Built Python scripts to extract, clean, organize, and validate structured data from public websites and internal sources.
- Used Selenium and BeautifulSoup to collect product, pricing, and market data from multi-page websites.
- Created CSV and JSON export pipelines for analysts and business users.
- Integrated REST APIs to enrich datasets with external metadata.
- Improved validation logic to reduce duplicate and incomplete records.
- Used Git, Linux command-line tools, and AI-assisted debugging workflows to improve development efficiency.

Python Automation Intern
Accenture — New York, NY | Jun 2022 – Aug 2023
- Assisted in building automation scripts for repetitive data-entry and reporting tasks.
- Cleaned and transformed Excel and CSV files using Pandas.
- Created simple internal tools to merge datasets and generate weekly reports.
- Documented scripts and workflows for non-technical team members.

Projects
Marketplace Price Tracker
- Built a Python scraper that collected product names, prices, ratings, and availability from e-commerce pages.
- Used Selenium for dynamic pages and Pandas for cleaning and exporting results.
- Added retry logic and duplicate detection.

API-Based Job Listing Collector
- Created a script that collected job listings from public APIs and stored them in SQLite.
- Added filters by location, seniority level, technology, and remote status.

Education
University of California, Berkeley
Bachelor of Arts in Computer Science
2018 – 2022

Languages
English — Native
Spanish — Intermediate

Report:
Target: Freelance Data Scraping Engineer — Mindrift.
Fit score: 7.5/10.
Changed: Focused the resume on end-to-end scraping workflows, dynamic sites, structured dataset delivery, validation, CSV/JSON/Google Sheets, and AI-assisted workflows.
Added supported keywords: data scraping, data extraction, data processing, BeautifulSoup, Selenium, APIs, LLM workflows, data cleaning, data validation, dynamic websites.
Still weak: Apify, OpenRouter, proxies, large-scale batching/parallelization, explicit 3+ years of scraping-specific experience.
Worth applying: Yes, but as a realistic stretch because of the 3-year requirement.

Application message:
I’m interested in the Freelance Data Scraping Engineer role because my experience is focused on Python web scraping, data extraction, validation, and structured dataset delivery. I have built scrapers for multi-page and dynamic websites using Selenium and BeautifulSoup, integrated REST APIs, and exported clean datasets in CSV, JSON, SQLite, and Google Sheets workflows. I also use AI tools and LLM-assisted debugging to improve development speed and troubleshoot scraping issues. I would bring careful data validation, independent execution, and practical Python automation experience to the project.

Stage 6 — Final summary
Best matches: Flashpoint — strongest overall match. FINTRX — strong entry-level data role, but data science/statistics should be strengthened. Mindrift — strong technical match, but the 3-year requirement makes it a stretch.
Realistic stretch applications: Madfish, OnHires, Addepto.
Apply only with a tailored version that is honest about current skills and emphasizes willingness to learn production data engineering.
Jobs to skip: iCapital VP/AVP-level Python Automation Engineer. It is too senior and requires AWS/serverless/authentication architecture not supported by the resume.
Strongest tailored resume version: Flashpoint. It uses the candidate’s strongest supported experience: Python, scraping, parsing, data validation, PostgreSQL/SQL basics, Git, and messy data workflows.
Keywords to add to the original resume: HTML parsing, data pipelines, data quality, structured data, data aggregation, API integration, PostgreSQL, dynamic websites, validation checks, duplicate detection, workflow automation, LLM-assisted debugging.
Skills to improve next: Scrapy, Requests, HTTP fundamentals, async scraping with asyncio/aiohttp, Airflow, Docker, PostgreSQL depth, cloud basics, CI/CD, logging/monitoring, data pipeline design, basic statistics, and one cloud data warehouse tool.
Job titles to search for: Junior Python Developer, Junior Web Scraping Developer, Python Automation Developer, Data Scraping Engineer, Data Collection Engineer, Junior Data Engineer, Data Quality Engineer, Entry-Level Data Scientist, Data Automation Engineer.
Reasonable salary / rate range: Based on the openings found, junior remote scraping/data roles vary widely. Concrete examples include Mindrift at up to $32/hour equivalent and Addepto at 8,400–15,120 PLN/month B2B. For US entry-level Python/data roles, a cautious target would be roughly $60k–$90k/year, while freelance scraping work could reasonably target $25–$40/hour depending on complexity and location restrictions.
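The API-Based Job Listing Collector that recurs across the tailored resumes (SQLite storage plus filters by location, seniority, technology, and remote status) could be sketched as follows. The schema, sample rows, and `find_jobs` helper are hypothetical illustrations of the described approach, not the candidate's real code.

```python
import sqlite3

# In-memory database with a hypothetical schema for collected listings.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    title TEXT, location TEXT, seniority TEXT, technology TEXT, remote INTEGER)""")

# Sample listings, as if fetched from a public API.
listings = [
    ("Junior Python Developer", "Remote", "junior", "python", 1),
    ("Senior Data Engineer", "New York", "senior", "python", 0),
    ("Web Scraping Engineer", "Remote", "junior", "python", 1),
]
conn.executemany("INSERT INTO jobs VALUES (?, ?, ?, ?, ?)", listings)

def find_jobs(conn, seniority=None, remote=None):
    """Return titles of stored listings matching the optional filters."""
    query, params = "SELECT title FROM jobs WHERE 1=1", []
    if seniority is not None:
        query += " AND seniority = ?"
        params.append(seniority)
    if remote is not None:
        query += " AND remote = ?"
        params.append(int(remote))
    return [row[0] for row in conn.execute(query, params)]

print(find_jobs(conn, seniority="junior", remote=True))
```

Parameterized queries (the `?` placeholders) keep the filters safe even when filter values come from user input or scraped text.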
🤖 GPT

Resume And Job Search Assistant

Add to Cart
Instant access
Usage rights: Commercial use
Money-back guarantee
By purchasing this prompt, you agree to our terms of service
GPT-5.5
Tested
Guide
4 examples
Free credits
Universal AI prompt for resume analysis, job search, and resume tailoring. It analyzes any resume, identifies suitable roles, finds relevant job openings, ranks the best matches, creates tailored resume versions, and writes short application messages. ATS-friendly, truthful, and suitable for different industries, levels, and work formats.
Added 4 days ago