
Software Engineer
at a Data as a Service company that helps businesses harness the power of data
About the Company:
We are a Data as a Service company that helps businesses harness the power of data. Our technology fuels some of the most interesting big data projects in the world. We are a small team working to shape the imminent data-driven future by solving some of its most fundamental and toughest challenges.
Role: We are looking for an experienced team lead to drive data acquisition projects end to end. In this role, you will work in the web scraping team with data engineers, helping them solve complex web problems and mentoring them along the way. You’ll be adept at delivering large-scale web crawling projects, breaking down barriers for your team, planning at a higher level, and getting into the details to make things happen when needed.
Responsibilities
- Interface with clients and the sales team to translate functional requirements into technical requirements
- Plan and estimate tasks with your team, in collaboration with the delivery managers
- Engineer complex data acquisition projects
- Guide and mentor your team of engineers
- Anticipate issues that might arise and proactively factor them into the design
- Perform code reviews and suggest design changes
Prerequisites
- 2-3 years of relevant experience
- Fluent programming skills; well-versed in scripting languages such as Python or Ruby
- Solid foundation in data structures and algorithms
- Excellent tech troubleshooting skills
- Good understanding of the web data landscape
- Prior exposure to the DOM and XPath, and hands-on experience with Selenium/automated testing, is a plus

Job Title : Python Developer – API Integration & AWS Deployment
Experience : 5+ Years
Location : Bangalore
Work Mode : Onsite
Job Overview :
We are seeking an experienced Python Developer with strong expertise in API development and AWS cloud deployment.
The ideal candidate will be responsible for building scalable RESTful APIs, automating power system simulations using PSS®E (psspy), and deploying automation workflows securely and efficiently on AWS.
Mandatory Skills : Python, FastAPI/Flask, PSS®E (psspy), RESTful API Development, AWS (EC2, Lambda, S3, EFS, API Gateway), AWS IAM, CloudWatch.
Key Responsibilities :
Python Development & API Integration :
- Design, build, and maintain RESTful APIs using FastAPI or Flask to interface with PSS®E.
- Automate simulations and workflows using the PSS®E Python API (psspy).
- Implement robust bulk case processing, result extraction, and automated reporting systems.
AWS Cloud Deployment :
- Deploy APIs and automation pipelines using AWS services such as EC2, Lambda, S3, EFS, and API Gateway.
- Apply cloud-native best practices to ensure reliability, scalability, and cost efficiency.
- Manage secure access control using AWS IAM, API keys, and implement monitoring using CloudWatch.
Required Skills :
- 5+ Years of professional experience in Python development.
- Hands-on experience with RESTful API development (FastAPI/Flask).
- Solid experience working with PSS®E and its psspy Python API.
- Strong understanding of AWS services, deployment, and best practices.
- Proficiency in automation, scripting, and report generation.
- Knowledge of cloud security and monitoring tools like IAM and CloudWatch.
Good to Have :
- Experience in power system simulation and electrical engineering concepts.
- Familiarity with CI/CD tools for AWS deployments.
Procedure is hiring for Drover.
This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.
About Drover
Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce their carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.
We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.
Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.
We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.
About The Role
As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.
Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.
What You'll Do
- Develop Drover's IoT cloud architecture from the ground up (it’s a greenfield project)
- Design and implement services to support wearable devices, mobile app, and backend API
- Implement data processing and storage pipelines
- Create and maintain Infrastructure-as-Code
- Support the engineering team across all aspects of early-stage development -- after all, this is a startup
Requirements
- 5+ years of experience developing cloud architecture on AWS
- In-depth understanding of various AWS services, especially those related to IoT
- Expertise in cloud-hosted, event-driven, serverless architectures
- Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
- Experience with networking and socket programming
- Experience with Kubernetes or similar orchestration platforms
- Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
- Familiarity with relational databases (PostgreSQL)
- Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
Nice To Have
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field
About the Role
We are seeking an experienced Python Data Engineer with a strong foundation in API and basic UI development. This role is essential for advancing our analytics capabilities for AI products, helping us gain deeper insights into product performance and driving data-backed improvements. If you have a background in AI/ML, familiarity with large language models (LLMs), and a solid grasp of Python libraries for AI, we’d like to connect!
Key Responsibilities
• Develop Analytics Framework: Build a comprehensive analytics framework to evaluate and monitor AI product performance and business value.
• Define KPIs with Stakeholders: Collaborate with key stakeholders to establish and measure KPIs that gauge AI product maturity and impact.
• Data Analysis for Actionable Insights: Dive into complex data sets to identify patterns and provide actionable insights to support product improvements.
• Data Collection & Processing: Lead data collection, cleaning, and processing to ensure high-quality, actionable data for analysis.
• Clear Reporting of Findings: Present findings to stakeholders in a clear, concise manner, emphasizing actionable insights.
Required Skills
• Technical Skills:
o Proficiency in Python, including experience with key AI/ML libraries.
o Basic knowledge of UI and API development.
o Understanding of large language models (LLMs) and experience using them effectively.
• Analytical & Communication Skills:
o Strong problem-solving skills to address complex, ambiguous challenges.
o Ability to translate data insights into understandable reports for non-technical stakeholders.
o Knowledge of machine learning algorithms and frameworks to assess AI product effectiveness.
o Experience in statistical methods to interpret data and build metrics frameworks.
o Skilled in quantitative analysis to drive actionable insights.
Our client is a call management solutions company that helps small to mid-sized businesses manage customer calls and queries through its virtual call center. It is an AI- and cloud-based call operating facility that is affordable as well as feature-optimized. Advanced features such as call recording, IVR, toll-free numbers, and call tracking are built on automation and enhance call-handling quality and process for each client, as per their requirements. They serve over 6,000 business clients, including large accounts like Flipkart and Uber.
- Selecting appropriate Cloud services to design and deploy an application based on given requirements
- Migrating complex, multi-tier applications on Cloud Platforms
- Designing and deploying enterprise-wide scalable operations on Cloud Platforms
- Implementing cost-control strategies
- Developing and maintaining the CI/CD pipeline for the assigned projects
- Conducting code reviews, and making technical contributions to product architecture
- Getting involved in solving bugs and delivering small features
- Fostering technical decision-making on the team, but taking final decisions when necessary
- Understanding engineering metrics and seeking to improve them
- Understanding requirements from the Product team, then planning and executing accordingly
What you need to have:
- Expertise in designing software and system architecture.
- Must have knowledge of Python (PHP knowledge is a plus) and related tools.
- Must understand MySQL queries and optimization.
- Must be able to build high-performance teams.
- Must have worked with similar technologies: Redis, Docker, AWS, Elasticsearch.
- Must know about microservice architectures and CI/CD pipelines.
- Must be great at planning, researching, and communicating.
- Must have a good understanding of application metrics.
Your Responsibilities would be to:
- Architect new and optimize existing software codebases and systems used to crawl, launch, run, and monitor the Anakin family of app crawlers
- Deeply own the lifecycle of software, including rolling out to operations, managing configurations, maintaining and upgrading, and supporting end-users
- Configure and optimize the automated testing and deployment systems used to maintain the 1,000+ crawlers across the company
- Analyze data and bugs that require in-depth investigations
- Interface directly with external customers including managing relationships and steering requirements
Basic Qualifications:
- Extremely effective, self-driven builder
- 2+ years of experience as a backend software engineer
- 2+ years of experience with Python
- 2+ years of experience with AWS services such as EC2, S3, Lambda, etc.
- Should have managed a team of software engineers
- Deep experience with network debugging across all OSI layers (Wireshark)
- Knowledge of networks and/or cybersecurity
Preferred Skills and Experience
- Broad understanding of the landscape of software engineering design patterns and principles
- Ability to work quickly and accurately under pressure, such as when removing runtime bugs within minutes
- Excellent communicator, both written and verbal
Additional Requirements
- Must be available to work extended hours and weekends when needed to meet critical deadlines
- Must have an aversion to politics and BS; should let their work speak for itself.
- Must be comfortable with uncertainty. In almost all cases, your job will be to figure it out.
- Must not be bound to a comfort zone. Often, you will need to challenge yourself to go above and beyond.
- Write lots of bug-free, efficient, scalable and reusable code.
- Write unit tests and take responsibility for the quality of your own code.
- Coach, encourage and mentor your fellow software developers to do the same.
- Consult with product owners to define, scope and plan new features.
- Test, evaluate and recommend technologies to improve the overall product.
- Be a key participant in the Agile process.
- Produce excellent documentation.
- Undertake and implement processes for smoother, more efficient deployment of the code base
- Maintain the code base as it grows and scales
What we value
- 2-3 years of experience building and shipping APIs using Python-based frameworks
- Proficiency with NoSQL databases (Elasticsearch, MongoDB) is a must
- Experience working with AWS services like SNS, SQS, VPC, etc. is preferred
- Experience with database migrations and system re-architecture is valued
- Ability to write modular, reusable, and clean code
- Comfortable with ticket management and documentation
Good knowledge of and experience working with backend systems;
server-side architecture design, including production maintenance experience, is a must-have.
At least 1-2 years of experience in a programming language such as Java, Ruby, PHP, Python, or Node.js (Node.js preferred).
Understanding of microservices-oriented architecture.
Experience with database design (SQL, NoSQL) and analytics.
Experience in driving and delivering complex features/software modules from technical design to launch.
Expertise with unit testing & Test-Driven Development (TDD)