
Role Overview
We are looking for an Automated QA Test Engineer (3–4 years experience) to design and implement
automated testing frameworks that ensure the quality and reliability of Hosted.ai’s core platform. The ideal
candidate will have hands-on experience with Pytest, Python scripting, and test automation systems,
along with the ability to architect test harnesses, plan test coverage, and triage bugs effectively.
Key Responsibilities
● Design and develop automated test frameworks and test harness logic.
● Implement end-to-end, integration, and regression tests using Pytest and Python.
● Define and execute test coverage plans for critical components of the platform.
● Conduct bug analysis, triage, and root cause identification in collaboration with
engineering teams.
● Ensure tests are reliable, repeatable, and integrated into CI/CD pipelines.
● Continuously improve test automation practices and tooling.
● Document test strategies, results, and defect reports clearly.
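As a sketch of the kind of Pytest-style test the responsibilities above describe, the snippet below exercises a small retry helper; `retry_until` and the test are illustrative examples, not part of Hosted.ai's actual codebase:

```python
# Hypothetical test-harness helper plus a Pytest-discoverable test function.
def retry_until(predicate, attempts=3):
    """Return True as soon as predicate() succeeds, retrying up to `attempts` times."""
    for _ in range(attempts):
        if predicate():
            return True
    return False

def test_retry_until_eventually_succeeds():
    calls = []
    def flaky():
        calls.append(1)
        return len(calls) >= 2  # fails on the first call, succeeds on the second
    assert retry_until(flaky, attempts=3)
    assert len(calls) == 2
```

Pytest collects any `test_*` function automatically, so a file of such functions plugs straight into a CI pipeline via `pytest`.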
Requirements
● 3–4 years of experience in software QA, with a focus on test automation.
● Strong background in manual testing.
● Strong hands-on experience with Pytest for UI and end-to-end testing.
● Proficiency in Python coding for test development and scripting.
● Experience architecting test harnesses and automation frameworks.
● Familiarity with CI/CD pipelines and version control systems (Git).
● Solid understanding of QA methodologies, test planning, and coverage
strategies.
● Strong debugging, analytical, and problem-solving skills.
Nice to Have
● Experience testing distributed systems, APIs, or cloud-native platforms.
● Exposure to performance/load testing tools.
● Familiarity with Kubernetes, containers, or GPU-based workloads.
About UpSolve
Work on a cutting-edge tech stack and build innovative solutions across Computer Vision, NLP, Video Analytics, and IoT.
Job Role
- Ideate use cases to include recent tech releases.
- Discuss business plans and assist teams in aligning with dynamic KPIs.
- Design end-to-end solution architecture, from inputs through infrastructure and services to the data store.
Job Requirements
- Working knowledge of Azure Cognitive Services.
- Project experience building AI solutions such as chatbots, sentiment analysis, and image classification.
- Quick Learner and Problem Solver.
Job Qualifications
- Work Experience: 2 years +
- Education: Computer Science/IT Engineer
- Location: Mumbai
We are reaching out about an exciting role at a startup firm in wealth management.
A small description about the Company.
The company is building a platform to drive wealth management.
It owns and operates an online investing platform that distributes mutual funds in India. The platform allows investors to buy and sell equity, debt, and tax-saving mutual funds. It is headquartered in Bengaluru, India.
We are looking for great talent for a Backend Developer role with the below skills.
• Excellent knowledge of at least one ecosystem based on Elixir/Phoenix, Ruby/Rails, Python/Django,
Go/Scala/Clojure
• Good OO skills, including strong design patterns knowledge
• Familiar with datastores like MySQL, PostgreSQL, Redis, Redshift etc.
• Familiarity with react.js/react-native, vue.js etc.
• Knowledge of deploying software to AWS, GCP, Azure
• Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration.
The company helps businesses uncover the 3% of active buyers in their target market. It evaluates
over 100 billion data points and analyzes factors such as buyer journeys, technology
adoption patterns, and other digital footprints to deliver market & sales intelligence.
Its customers have access to the buying patterns and contact information of
more than 17 million companies and 70 million decision makers across the world.
Role – Data Engineer
Responsibilities
Work in collaboration with the application team and integration team to
design, create, and maintain optimal data pipeline architecture and data
structures for Data Lake/Data Warehouse.
Work with stakeholders including the Sales, Product, and Customer Support
teams to assist with data-related technical issues and support their data
analytics needs.
Assemble large, complex data sets from third-party vendors to meet business
requirements.
Identify, design, and implement internal process improvements: automating
manual processes, optimizing data delivery, re-designing infrastructure for
greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and
loading of data from a wide variety of data sources using SQL, Elasticsearch,
MongoDB, and AWS technology.
Streamline existing and introduce enhanced reporting and analysis solutions
that leverage complex data sources derived from multiple internal systems.
Requirements
5+ years of experience in a Data Engineer role.
Proficiency in Linux.
Must have SQL knowledge and experience working with relational databases and
query authoring (SQL), as well as familiarity with databases including MySQL,
MongoDB, Cassandra, and Athena.
Must have experience with Python/Scala.
Must have experience with Big Data technologies like Apache Spark.
Must have experience with Apache Airflow.
Experience with data pipeline and ETL tools like AWS Glue.
Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
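A minimal sketch of the extract-transform-load flow the responsibilities above describe, using the stdlib sqlite3 module in place of the actual AWS/Redshift stack; the table and field names here are hypothetical:

```python
import sqlite3

def run_etl(raw_rows, conn):
    """Extract raw vendor rows, drop incomplete records, and load into a warehouse table."""
    # Transform: keep only rows that name a company, and normalise casing.
    clean = [(r["company"].strip().lower(), r["employees"])
             for r in raw_rows if r.get("company")]
    conn.execute("CREATE TABLE IF NOT EXISTS companies (name TEXT, employees INTEGER)")
    conn.executemany("INSERT INTO companies VALUES (?, ?)", clean)
    conn.commit()
    return len(clean)

conn = sqlite3.connect(":memory:")
raw = [{"company": " Acme ", "employees": 120},
       {"company": "", "employees": 5},          # dropped: no company name
       {"company": "Globex", "employees": 800}]
loaded = run_etl(raw, conn)
```

The same extract/transform/load shape carries over when the source is a third-party feed and the sink is Redshift; only the connectors change.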
We are establishing infrastructure for internal and external reporting using Tableau and are looking for someone with experience building visualizations and dashboards in Tableau and using Tableau Server to deliver them to internal and external users.
Required Experience
- Implementation of interactive visualizations using Tableau Desktop
- Integration with Tableau Server and support of production dashboards and embedded reports with it
- Writing and optimization of SQL queries
- Proficient in Python, including the pandas and NumPy libraries, for data exploration and analysis
- 3 years of experience working as a Software Engineer / Senior Software Engineer
- Bachelor's in Engineering (Electronics and Communication, Computer, or IT)
- Well versed in basic data structures, algorithms, and system design
- Capable of working well in a team, with very good communication skills
- Self-motivated, organized, and fun to work with
- Productive and efficient working remotely
- Test driven mindset with a knack for finding issues and problems at earlier stages of development
- Interest in learning and picking up a wide range of cutting edge technologies
- Should be curious and interested in learning some Data science related concepts and domain knowledge
- Work alongside other engineers on the team to elevate technology and consistently apply best practices
Highly Desirable
- Data Analytics
- Experience in AWS cloud or any cloud technologies
- Experience with Big Data and streaming technologies such as PySpark and Kafka is a big plus
- Shell scripting
- Preferred tech stack – Python, Rest API, Microservices, Flask/Fast API, pandas, numpy, linux, shell scripting, Airflow, pyspark
- Strong backend experience with microservices and REST APIs (Flask, FastAPI) and with relational and non-relational databases
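Much of the SQL-writing and optimization work listed above comes down to indexing the columns a dashboard filters on. A small sqlite3 sketch (the `sales` table and its columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10.0), ("south", 20.0), ("north", 5.0)])

# Without an index this filter is a full table scan; with it, a keyed lookup.
conn.execute("CREATE INDEX idx_sales_region ON sales(region)")

# EXPLAIN QUERY PLAN shows whether the planner picked the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = 'north'"
).fetchone()
total = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'north'"
).fetchone()[0]
```

The same habit of reading the query plan before and after adding an index carries over to the production database behind a Tableau dashboard.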
Job Description:
Candidate must have 2 to 5 years of experience across the various phases of developing a Python-based application or API.
Requirements:
Must Have: Strong expertise in Python and its built-in data structures; developing APIs using Flask or FastAPI; data wrangling using standard Python libraries such as pandas and NumPy; and integrating various applications (third-party or in-house) with Python.
Good to Have: Deployment using Nginx, Gunicorn, IIS, Docker, Kubernetes, etc. Good understanding of optimization methods such as differential evolution. Hands-on experience with Python packages like SciPy, and with Tkinter for building Python-based applications.
Roles and Responsibilities:
- Understanding/gathering requirements from stakeholders, formulating the problem, and leveraging AI/ML to solve business problems.
- Integrating third-party or in-house applications with the Python solution.
- Working on deployment and optimization of various engineering problems using numerical/constraint optimization frameworks such as differential evolution, and deploying web-based APIs using Flask and Nginx/IIS.
- Unit testing the various Python modules developed, and testing the API.
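A minimal sketch of the unit-testing responsibility above, using the stdlib unittest module against a hypothetical API handler written as a pure function; the handler and its fields are illustrative, not from any actual codebase:

```python
import unittest

def create_user_handler(payload):
    """Hypothetical API handler body: validate input and return a response dict."""
    if not payload.get("name"):
        return {"status": 400, "error": "name is required"}
    return {"status": 201, "user": {"name": payload["name"]}}

class CreateUserHandlerTest(unittest.TestCase):
    def test_rejects_missing_name(self):
        self.assertEqual(create_user_handler({})["status"], 400)

    def test_creates_user(self):
        resp = create_user_handler({"name": "asha"})
        self.assertEqual(resp["status"], 201)
        self.assertEqual(resp["user"]["name"], "asha")
```

Keeping the handler logic in a plain function like this lets the same tests run whether the function is later mounted behind a Flask route or a FastAPI endpoint.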
Looking for a Data Engineer for our own organization.
Notice Period: 15–30 days
CTC: up to 15 LPA
Preferred Technical Expertise
- Expertise in Python programming.
- Proficient with the pandas/NumPy libraries.
- Experience with Django framework and API Development.
- Proficient in writing complex queries using SQL
- Hands on experience with Apache Airflow.
- Experience with source code versioning tools such as GIT, Bitbucket etc.
Good to have Skills:
- Create and maintain Optimal Data Pipeline Architecture
- Experienced in handling large structured data.
- Demonstrated ability in solutions covering data ingestion, data cleansing, ETL, Data mart creation and exposing data for consumers.
- Experience with any cloud platform (GCP is a plus)
- Experience with JQuery, HTML, Javascript, CSS is a plus.
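One way to picture the data-pipeline-architecture point above: task ordering in an orchestrator like Airflow reduces to a topological sort of the dependency graph, which the stdlib graphlib module can illustrate (the task names below are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical DAG, mapping each task to the tasks it depends on:
# extract must finish before cleanse, and cleanse before both loads.
dag = {
    "cleanse":   {"extract"},
    "load_mart": {"cleanse"},
    "load_api":  {"cleanse"},
}
order = list(TopologicalSorter(dag).static_order())
```

Airflow's scheduler does the same ordering (plus retries, scheduling, and parallelism) over tasks declared with `>>` dependencies.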
Your mission is to help lead the team toward creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.
Responsibilities and Duties :
- As a Data Engineer you will be responsible for developing data pipelines for numerous applications handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially in Spark and Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions.
Education level :
- Bachelor's degree in Computer Science or equivalent
Experience :
- Minimum 3+ years relevant experience working on production grade projects experience in hands on, end to end software development
- Expertise in application, data and infrastructure architecture disciplines
- Expertise in designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design and business processes
Proficiency in :
- Modern programming languages like Java, Python, Scala
- Big Data technologies Hadoop, Spark, HIVE, Kafka
- Writing decently optimized SQL queries
- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)
- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and AGILE.
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.
- Experience generating physical data models and the associated DDL from logical data models.
- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping,
and data rationalization artifacts.
- Experience enforcing data modeling standards and procedures.
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
Skills :
Must Know :
- Core big-data concepts
- Spark - PySpark/Scala
- Data integration tools like Pentaho, NiFi, SSIS, etc. (at least one)
- Handling of various file formats
- Cloud platform - AWS/Azure/GCP
- Orchestration tool - Airflow
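As a plain-Python sketch of the Spark-style transformations in the must-know list above: the map/reduce shape that PySpark expresses as `map` and `reduceByKey` can be illustrated without a cluster (the log-level data below is made up for the example):

```python
from collections import defaultdict

def reduce_by_key(pairs):
    """Group (key, value) pairs and sum values per key, like Spark's reduceByKey(add)."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

records = ["error", "info", "error", "warn", "error"]
pairs = [(level, 1) for level in records]   # the "map" step
counts = reduce_by_key(pairs)               # the "reduce" step
```

In PySpark the same computation is `rdd.map(lambda lvl: (lvl, 1)).reduceByKey(lambda a, b: a + b)`, with the grouping distributed across partitions.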