


About the role:
· Understand the project test requirements.
· Execute and evaluate automated test cases/suites and report test results to ensure that system functionality satisfies acceptance criteria.
· Create and maintain automation scripts using the Robot Framework.
· Help create acceptance criteria for user stories.
· Log bug reports based on test execution.
· Collaborate with the development team to prioritize test scenarios.
· Collaborate with the development team to perform root cause analysis.
· Other duties as assigned.
Position Requirements:
Essential Skills:
· Knowledgeable in Software Automation using Robot Framework
· Knowledgeable in API testing (Postman)
· Experience working in an Agile environment using Scrum or Kanban
· Proficiency in scripting and programming languages (Python preferred)
· Experience with REST API testing and relevant testing tools (a brief sketch follows this list).
· Understanding of the differences between JSON, YAML, and XML.
· Experience with CI/CD pipelines.
· Experience testing web applications, with familiarity across different testing approaches and test environments.
· Experience in test risk management.
· Experience using Git repositories.
· A passion for software product quality assurance with a positive mindset and good communication skills.
· Keen eye for detail.
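
As a concrete picture of the REST API testing asked for above, here is a minimal sketch in Python using requests and pytest (assumed tooling; the endpoint, payload, and expected status code are hypothetical):

```python
# test_users_api.py - a minimal REST API test sketch (hypothetical endpoint).
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_user_returns_201():
    payload = {"name": "Ada", "email": "ada@example.com"}
    resp = requests.post(f"{BASE_URL}/users", json=payload, timeout=5)
    assert resp.status_code == 201
    body = resp.json()  # the API is assumed to return the created resource as JSON
    assert body["email"] == payload["email"]
```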
Good to Have:
· ISTQB foundation level certification
· Familiarity with other automation testing tools such as Selenium and Katalon Studio.
· Ability to suggest and implement framework improvements to accommodate scripting needs.
· Experience implementing an automation framework using a Behavior-Driven Development (BDD) approach (a minimal sketch follows this list).
· Familiarity with the Test Pyramid.
· Experience working with JIRA.
· Experience working with JIRA
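
For the BDD item above, a minimal sketch of Python step definitions using the behave library (one possible BDD tool; the posting does not name one, and the scenario is hypothetical):

```python
# steps/cart_steps.py - step definitions for a hypothetical feature file:
#   Scenario: Add an item to the cart
#     Given an empty cart
#     When the user adds "apple" to the cart
#     Then the cart contains 1 item
from behave import given, when, then

@given("an empty cart")
def step_empty_cart(context):
    context.cart = []

@when('the user adds "{item}" to the cart')
def step_add_item(context, item):
    context.cart.append(item)

@then("the cart contains {count:d} item")
def step_cart_count(context, count):
    assert len(context.cart) == count
```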
Non-technical requirements:
● You are available to join us in our Bangalore office from Day 1.
● You have strong communication skills.
● You have strong analytical skills.
● You are customer-friendly and service-minded.
● You are a team player.



Preferred Skills:
- Experience with XML-based web services (SOAP, REST).
- Knowledge of database technologies (SQL, NoSQL) for XML data storage.
- Familiarity with version control systems (Git, SVN).
- Understanding of JSON and other data interchange formats.
- Certifications in XML technologies are a plus.
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), along with automation and testing strategies, translating business needs into technical solutions while adhering to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation (a brief sketch follows this list).
- Excellent understanding of in-memory distributed computing frameworks such as Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL or analytical databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus.
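
As a rough illustration of the Spark tuning and resource-allocation experience asked for above, a minimal PySpark sketch (the configuration values and paths are illustrative assumptions, not recommendations):

```python
# A PySpark session with explicit tuning knobs, then a cached aggregation.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-tuning-sketch")
    .config("spark.executor.memory", "4g")            # per-executor resource allocation
    .config("spark.executor.cores", "4")
    .config("spark.sql.shuffle.partitions", "200")    # sized to the data, not left at default
    .config("spark.sql.adaptive.enabled", "true")     # let AQE coalesce skewed shuffle partitions
    .getOrCreate()
)

df = spark.read.parquet("s3a://bucket/events/")       # hypothetical source path
agg = df.groupBy("user_id").count().cache()           # cache for reuse across actions
agg.write.mode("overwrite").parquet("s3a://bucket/agg/")
```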

Clinical Trials are the biggest bottleneck in bringing new drugs, devices, and vaccines to patients. On average, getting a new drug through the trial process takes nearly a decade and frequently costs $1B+. To make it worse, the process is inflicted with a great number of transparency issues. We are aiming to solve this through technology and platformization of clinical trials. We develop and offer next-generation technology platforms to pharmaceutical and biotech companies for running their clinical trials by integrating the entire process in an end-to-end workflow. Since Day 1, our vision is to make clinical trials faster, seamless, accessible to patients, and more transparent. We are driven by our technology-first approach to reduce inefficiency and increase patient-centricity in Clinical Trials. Founded by IIT Roorkee Alumni, Triomics is backed by top investors such as Y Combinator, Nexus Venture Partners, and General Catalyst.
Responsibilities:
1. Writing clean, modular, scalable, and reusable code with well-maintained documentation.
2. Working closely with the founding team to come up with product implementation architecture.
3. Designing and Implementing APIs while closely collaborating with front-end developers.
4. Implementing a stable and automated DevOps pipeline with state-of-the-art cloud services.
5. Developing database schemas and routinely evaluating them as per product requirements.
6. Maintaining high test coverage by implementing unit as well as integration tests (a minimal sketch follows).
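
A minimal sketch of what point 6 could look like in practice, assuming pytest as the runner (the business-logic function and the endpoint are hypothetical):

```python
# test_orders.py - one fast unit test and one marked integration test.
import pytest
import requests

def compute_total(prices, tax=0.18):
    """Hypothetical business logic, covered by a pure unit test."""
    return round(sum(prices) * (1 + tax), 2)

def test_compute_total_unit():
    assert compute_total([100.0, 50.0]) == 177.0

@pytest.mark.integration  # selected with `pytest -m integration` against a running server
def test_orders_endpoint_integration():
    resp = requests.get("http://localhost:8000/api/orders/", timeout=5)
    assert resp.status_code == 200
```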
Tech Stack:
Our tech stack includes Python, Django, and PostgreSQL for the backend, and ReactJS for the frontend. We also use Celery and Redis for scheduling, and multiple AWS services combined with Docker for deployment.
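
As a sketch of how Celery and Redis fit together in a stack like this (the broker URL and the task body are illustrative assumptions):

```python
# tasks.py - a minimal Celery app backed by Redis.
from celery import Celery

app = Celery("backend", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3)
def sync_trial_records(self, trial_id):
    """Hypothetical background job; retried with backoff on failure."""
    try:
        ...  # e.g., read from PostgreSQL and push to an external service
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)
```

A worker would be started with `celery -A tasks worker`, and the job queued from application code with `sync_trial_records.delay(trial_id)`.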
Requirements:
1. Bachelor's in Computer Science or a related field with 1-6 years of experience.
2. Have implemented and deployed at least 1 or 2 production-level applications in the past.
3. Strong experience with Python (Django) as well as REST APIs.
4. Comfortable working with SQL Databases (Preferably PostgreSQL)
5. Experience with DevOps - Docker or Kubernetes, CI/CD pipelines, and AWS.
6. Prior experience of working in an early-stage startup environment is a plus.
Benefits:
1. Competitive CTC: 10-30% hike in the fixed component from your last or current salary + ESOPs (10-25L).
2. Rent-free accommodation in Gurugram.
3. Flexible paid time off for full-time employees and paid leave for new parents.


Requirements
- Previous working experience as a Python web developer for a minimum of 2 years.
- Hands-on experience with Django, Flask, or other Python frameworks.
- Experience with the design and implementation of object and data models.
- Working knowledge of JavaScript, HTML5, and CSS3.
- Understanding of CI/CD practices.
- Familiarity with ORM (Object Relational Mapper) libraries.
- Proficient understanding of code versioning tools such as Git, Mercurial, or SVN.
- Hands-on experience writing complex Python scripts to capture business logic and automate system management tasks (a small sketch follows this list).
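
A small sketch of the kind of system-management scripting described in the last item (the threshold and mount points are arbitrary assumptions):

```python
#!/usr/bin/env python3
# Flag mounts that are running low on disk space.
import shutil

THRESHOLD = 0.90          # alert when a mount is more than 90% full
MOUNTS = ["/", "/var"]    # hypothetical mount points to watch

def used_ratio(path):
    usage = shutil.disk_usage(path)
    return (usage.total - usage.free) / usage.total

if __name__ == "__main__":
    for mount in MOUNTS:
        ratio = used_ratio(mount)
        status = "WARNING" if ratio > THRESHOLD else "OK"
        print(f"{status}: {mount} is {ratio:.0%} full")
```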

Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open-source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team offers many opportunities to learn, deliver, and grow your career. We are looking for innovative minds with experience leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S./equivalent degree in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using the latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file and object storage systems such as HDFS, S3, or ADLS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team

DemandMatrix Inc. is a data company that provides Go-To-Market, Operations, and Data Science teams with high-quality, company-level data and intelligence. DemandMatrix uses advanced data science methodologies to process millions of unstructured data points, producing reliable and accurate technology intelligence, organizational intent, and B2B spend data. Our product solves challenging problems for clients such as Microsoft, DocuSign, Leadspace, and many more.
Job Description
We use machine learning and narrow AI to find companies and the products they use, by researching millions of publicly available sources and over a billion documents per month. We are looking for a Tech Lead who loves technical challenges and is a problem solver. This role offers the opportunity to brainstorm ideas and implement solutions from scratch.
What will you do?
You will be part of the team responsible for our product roadmap.
You will be involved in rapid prototyping and quick roll-outs of ideas, in a fast-paced environment, working alongside some of the most talented people in the industry.
Lead a team and deliver the best output in an agile environment.
Communicate effectively, both orally and in writing with a globally distributed team.
Who Are You?
You have designed and built multiple high-volume web services and data pipelines.
You are genuinely excited about technology and have worked on projects from scratch.
You are a highly motivated individual who thrives where problems are open-ended.
Must have
- 7+ years of hands-on experience in Software Development with a focus on microservices and data pipelines.
- Minimum 3 years of experience building services and pipelines using Python.
- Minimum 1 year of experience working with large volumes of data using MongoDB (a brief sketch follows this list).
- Minimum 1 year of hands-on experience with big data pipelines and data warehouses.
- Experience designing, building, and deploying scalable and highly available systems on AWS or GCP.
- Experience with Java
- Experience with Docker / Kubernetes
- Exposure to React.js for front-end development
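
For the MongoDB item above, a brief pymongo sketch of keeping high-volume work server-side with an aggregation pipeline (the connection string, collection, and field names are hypothetical):

```python
# Top companies by product mentions, computed inside MongoDB rather than
# pulling documents into Python - at large volumes this keeps the heavy
# lifting server-side.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

pipeline = [
    {"$match": {"product": "crm"}},
    {"$group": {"_id": "$company_id", "mentions": {"$sum": 1}}},
    {"$sort": {"mentions": -1}},
    {"$limit": 10},
]
for row in events.aggregate(pipeline, allowDiskUse=True):  # spill to disk on big groups
    print(row["_id"], row["mentions"])
```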
Additional Information
- Flexible working hours
- Fully remote work (work from home)
- Birthday leave







