
Tech Lead (System Requirement and Data Management Lead)
Our client is a fast-growing European passenger car company.
We are looking for a Systems Engineer who can manage requirements and data in IBM Rational DOORS and Siemens Polarion. You will be part of a global development team with resources in China, Sweden, and the US.
Responsibilities and tasks
- Import requirement specifications into DOORS modules
- Create the module structure according to the written specification (e-mail, Word, etc.)
- Formats: ReqIF, Word, Excel, PDF, CSV
- Adjust the data as required so it can be imported into the tool
- Review that the result is readable and workable
- Import information into new or existing modules in DOORS
- Feed compliance status from an Excel compliance matrix back into a DOORS module (see the sketch after this list)
- Import requirements from one module to another based on baseline/filter…
- Import lists of items (test cases, documents, etc.) in Excel or CSV into a module
- Provide guidance on format to information holders at the client
- Link information/attribute data from one module to others
- Status, test results, comment
- Link requirements according to information from the client in any given format
- Export data and reports
- Assemble reports based on data from one or several modules according to filters/baselines/written requests in any given format
- Export statistics from data in DOORS modules
- Create filters in DOORS modules
Note: Polarion activities are the same as the DOORS activities, but the process, results, and structure may vary.
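For illustration, a minimal Python sketch of the kind of data preparation involved in the compliance-matrix item above, assuming pandas is available; the file and column names are placeholders, not the client's actual format:

```python
# Minimal sketch (assumed file names and column layout): flatten an Excel
# compliance matrix into a CSV that a DOORS import can consume.
import pandas as pd

# Hypothetical input: one row per requirement, with ID and compliance status.
matrix = pd.read_excel("compliance_matrix.xlsx", sheet_name=0)

# Keep only the columns the target DOORS module expects and normalize the status values.
export = matrix[["Requirement ID", "Compliance Status", "Comment"]].copy()
export["Compliance Status"] = export["Compliance Status"].str.strip().str.upper()

# Write a UTF-8 CSV ready for import into the target DOORS module.
export.to_csv("compliance_feedback.csv", index=False, encoding="utf-8")
```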
Requirements – must-have list (short, genuine musts, in no particular order)
- 10+ years of overall experience in the automotive industry
- Requirements management experience in the automotive industry
- 3+ years of experience as a Rational DOORS user
- Knowledge of Siemens Polarion; working knowledge is a plus
- More than 7 years of experience in offshore delivery
- Able to lead a team of 3 to 5 people and manage temporary additions to the team
- Working knowledge of ASPICE and handling requirements according to ASPICE L2
- Experience in setting up offshore delivery that best fits the customer's expectations
- Experience in setting up quality processes and ways of working
- Experience in metrics management – propose, capture, and share metrics with internal/external stakeholders
- Good communication skills in English
Requirements – good-to-have list, strictly sorted in descending priority order
- Experience with a DevOps delivery framework
- Interest in learning new languages
- Handling requirements according to ASPICE L3
- Willingness to travel; travel to Sweden may be needed (approx. 1-2 trips per year)
Soft skills
- The candidate must be a driven and proactive person, able to work with minimal supervision, and will be asked to give example situations during interviews.
- Good team player with attention to detail, self-disciplined, able to manage their own time and workload, proactive and motivated.
- Strong sense of responsibility and commitment, innovative thinking.

Branch Overview
Branch delivers world-class financial services to the mobile generation. With offices in the United States, Nigeria, Kenya, and India, Branch is a for-profit socially conscious company that uses the power of data science to reduce the cost of delivering financial services in emerging markets. We believe that everyone everywhere deserves fair financial access. The rapid spread of smartphones presents an opportunity for the world’s emerging middle class to access banking options and achieve financial flexibility.
Branch’s mission-driven team is led by the founder and former CEO of Kiva.org. The company presents a rich opportunity for our team members to drive meaningful growth in rapidly evolving and changing markets. In 2019, Branch announced its Series C and has garnered more than $100M in funding with investments from leading Silicon Valley firms including Andreessen Horowitz, Trinity Capital, Foundation Capital, Visa, and the International Finance Corporation (IFC).
We value diversity and are committed to providing an inclusive working environment where human beings of all backgrounds can thrive.
Job Overview
Branch is seeking a seasoned DevOps Engineer to own parts of our cloud infrastructure and DevOps operations. In this role, you will lead by example and design, deploy, and optimize our AWS-based infrastructure, ensuring seamless orchestration of workloads across Kubernetes and serverless environments like AWS Lambda.
You will play a pivotal role in automating processes, enhancing system reliability, and driving the adoption of DevOps best practices. Collaborating closely with our Engineering, Product, and Data teams, you’ll contribute to scaling our infrastructure and supporting our rapid growth. This position offers a unique opportunity to refine your technical expertise in a dynamic and fast-paced environment.
Responsibilities
- Own and drive the architecture, design, and scaling of various parts of our cloud infrastructure on AWS, ensuring security, resilience, and cost efficiency
- Optimize Kubernetes clusters, including advanced scheduling, networking, and security enhancements to support mission-critical workloads
- Architect and improve CI/CD pipelines, incorporating automation, canary deployments, and rollback strategies for seamless releases (see the sketch after this list)
- Design and implement monitoring, logging, and observability solutions to ensure proactive issue detection and system performance tuning at scale
- Establish and enforce security best practices, including IAM governance, secret management, and compliance frameworks
- Be the go-to expert for multiple infrastructure components, providing technical leadership and driving improvements across interconnected systems
- Lead large-scale projects spanning multiple quarters, defining roadmaps, tracking progress, and ensuring timely execution with minimal supervision
- Drive collaboration with cross-functional teams, including ML, Data, and Product, to align infrastructure solutions with business and engineering goals
- Mentor and support junior and mid-level engineers, fostering a culture of continuous learning, technical excellence, and best practices
- Set and refine DevOps standards, driving automation, scalability, and system reliability across the organization
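As a rough illustration of the canary/rollback item above, a minimal Python sketch using boto3 and CloudWatch; the namespace, metric, dimension, and threshold are placeholders, not Branch's actual configuration:

```python
# Minimal sketch (assumes boto3 credentials are configured; metric names and the
# target-group dimension are placeholders): check the canary's 5XX count over the
# last 10 minutes before deciding to promote or roll back.
from datetime import datetime, timezone, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "TargetGroup", "Value": "targetgroup/canary/0123456789abcdef"}],
    StartTime=now - timedelta(minutes=10),
    EndTime=now,
    Period=600,
    Statistics=["Sum"],
)

errors = sum(point["Sum"] for point in response["Datapoints"])
if errors > 5:  # arbitrary threshold for the sketch
    print(f"Canary unhealthy ({errors:.0f} 5XX responses) -> roll back")
else:
    print("Canary healthy -> promote")
```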
Qualifications
- A minimum of 7 years of experience in DevOps, SRE, or a similar role, with expertise in designing and managing large-scale cloud infrastructure
- Experience working on software product development, with proficiency in a mainstream stack.
- Deep hands-on experience with AWS services such as EC2, S3, RDS, Lambda, ECS, EKS, and VPC networking
- Advanced proficiency in Terraform for infrastructure as code, including best practices for scaling and managing cloud resources
- Strong expertise in Kubernetes, including cluster provisioning, networking, security hardening, and performance optimization
- Proficiency in scripting and automation using Python, Bash, or Go, with experience integrating APIs and optimizing workflows
- Experience designing and maintaining CI/CD pipelines using tools like CircleCI, Jenkins, GitLab CI/CD, or ArgoCD
- Strong knowledge of monitoring, logging, and observability tools such as DataDog, Prometheus, Grafana, and AWS CloudWatch
- Deep understanding of cloud security, IAM governance, role-based access control (RBAC), and compliance frameworks like SOC2 or ISO 27001
- Proven ability to lead and mentor junior engineers while fostering a collaborative and high-performance team culture
- Excellent communication skills, with the ability to work effectively across cultures, functions, and time zones in a globally distributed team
Benefits of Joining
- Be part of a mission-driven, fast-paced, and entrepreneurial environment that fosters innovation and impact
- Receive a competitive salary and equity package, reflecting your value and contributions
- Thrive in a collaborative and flat company culture that encourages open communication and idea-sharing
- Enjoy the flexibility of a remote-first work setup, with opportunities for occasional in-person collaboration
- Benefit from fully-paid Health Insurance to support your well-being
- Work-life balance is not a myth. Take advantage of paid time off, including personal leave, bereavement leave, and sick leave
- Access fully paid parental leave, with 6 months of maternity leave and 3 months of paternity leave
- Leverage annual professional development budget to upskill and advance your career
- Participate in discretionary trips to our offices around the globe, supported by global travel medical insurance
- Enjoy team meals and social events, both virtual and in-person, to connect and bond with colleagues
THE POSITION
Ashnik is looking for an experienced technology consultant to work in the DevOps team in a pre-sales function. The primary areas of focus will be microservices, CI/CD pipelines, Docker, Kubernetes, containerization, and container security. The person in this role will be responsible for leading technical discussions with customers and partners and helping them arrive at the final solution.
QUALIFICATION AND EXPERIENCE
- Engineering or equivalent degree
- Must have at least 8 years of experience in the IT industry designing and delivering solutions
- Must have at least 3 years of hands-on experience with Linux and operating systems
- Must have at least 3 years of experience working in an environment with highly virtualized or cloud-based infrastructure
- Must have at least 2 years of hands-on experience with CI/CD pipelines, microservices, containerization, and Kubernetes (see the sketch after this list)
- Though coding is not needed in this role, the person should have the ability to understand and debug code if required
- Should be able to explain complex solutions in simple terms
- Should be ready to travel 20-40% of the month
- Should be able to engage with customers to understand their fundamental/driving requirements
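As an illustration of the hands-on Kubernetes familiarity expected, a minimal Python sketch assuming the `kubernetes` client package and a local kubeconfig; the namespace is a placeholder:

```python
# Minimal sketch: list pods in a namespace and flag any that are not running,
# the kind of quick health check a consultant might walk through in a customer workshop.
from kubernetes import client, config

config.load_kube_config()                     # use the current kubectl context
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    phase = pod.status.phase
    marker = "" if phase == "Running" else "  <-- needs attention"
    print(f"{pod.metadata.name}: {phase}{marker}")
```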
DESIRED SKILLS
- Past experience working with Docker and/or Kubernetes at scale
- Past experience working in a DevOps team
- Prior experience in a pre-sales role
RESPONSIBILITIES
- Own pre-sales or sales engineering responsibility to design, present, and deliver technical solutions
- Be the point of contact for all technical queries for Sales team and partners
- Build full-fledged solution proposals with details of implementation and scope of work for customers
- Contribute to technical writing through blogs, whitepapers, and solution demos
- Present at and participate in industry events
- Conduct customer workshops to educate them about features in Docker Enterprise Edition
- Coordinate technical escalations with principal vendor
- Develop an understanding of the various other components and considerations involved in the areas mentioned above
- Be able to articulate the value of products from the technology vendors Ashnik partners with, e.g. Docker, Sysdig, HashiCorp, Ansible, Jenkins, etc.
- Work with partners and the sales team to respond to RFPs and tenders
WHAT IS IN IT FOR YOU?
You would add to your career the experience of working with a leading open-source solutions company in the Southeast Asia region. You would get to learn from the leaders and grow in the industry. This is a great opportunity to grow your career through continuous learning, adding depth and breadth of technologies. Since we work with leading open-source technologies and engage with large enterprises, it creates enormous possibilities for career growth for our team. Not to mention that our people find the journey with Ashnik to be an exciting and fulfilling experience.
As an MLOps Engineer at QuantumBlack you will:
- Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
- Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
- Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
- Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store); see the sketch after this list
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
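As a small illustration of the experiment-tracking item above, a minimal Python sketch using MLflow (which appears in the tech stack); the tracking URI, experiment name, parameters, and metric values are placeholders:

```python
# Minimal sketch of experiment tracking with MLflow; all values are placeholders.
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")   # hypothetical tracking server
mlflow.set_experiment("churn-model")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("max_iter", 200)
    # In a real pipeline these values would come from the training/evaluation step.
    mlflow.log_metric("auc", 0.87)
    mlflow.log_metric("accuracy", 0.81)
```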
About Hop:
We are a London, UK-based FinTech startup with a subsidiary in India. Hop is working towards building the next-generation digital banking platform for seamless and economical currency exchange, with technology at the crux of it. In a technology-driven era, many financial services platforms still fall short on customer experience and are cumbersome to use. Hop aims to build a ‘state of the art’, tech-centric, customer-focused solution.
moneyHOP is India’s first cross-border neo-bank providing millennials the ability to ‘Send’ & ‘Spend’ conveniently and economically across the globe using HOPRemit (An online remittance portal) and HOP app + Card (A multi-currency bank account).
This position is a crucially important position in the firm and the person hired will have the liberty to drive the product and provide direction in line with business needs.
Website: https://moneyhop.co/
About Individual
Looking for an enthusiastic individual who is passionate about technology and has worked with either a start-up or a blue-chip firm in the past.
The candidate needs to be a multi-tasker and a highly self-motivated self-starter with the ability to work in a high-stress environment. He/she should be tech-savvy and willing to embrace new technology comfortably.
Ideally, the candidate should have experience working with the technology stack of scalable, high-growth mobile application software.
General Skills
- 3-4 years of experience in DevOps.
- Bachelor's degree in Computer Science, Information Science, or equivalent practical experience.
- Exposure to Behaviour Driven Development and experience in programming and testing.
- Excellent verbal and written communication skills.
- Good time management and organizational skills.
- Dependability
- Accountability and Ownership
- Right attitude and growth mindset
- Trustworthiness
- Ability to embrace new technologies
- Ability to get work done
- Should have excellent analytical and troubleshooting skills.
Technical Skills
- Work with developer teams with a focus on automating build and deployment using tools such as Jenkins.
- Implement CI/CD in projects (GitLab CI preferred).
- Enable software build and deployment.
- Provision and automate day-to-day operations using tools, e.g. Ansible, Bash.
- Plan and write infrastructure as code using Terraform.
- Monitoring and ITSM automation: incident creation from alerts using licensed and open-source tools.
- Manage credentials for AWS cloud servers, GitHub repos, Atlassian Cloud services, Jenkins, OpenVPN, and the developers' environments (see the sketch after this list).
- Build environments for unit tests, integration tests, system tests, and acceptance tests using Jenkins.
- Create and spin up resource instances.
- Experience implementing CI/CD.
- Experience with infrastructure automation solutions (Ansible, Chef, Puppet, etc. ).
- Experience with AWS.
- Should have expert Linux and Network administration skills to troubleshoot and trace symptoms back to the root cause.
- Knowledge of application clustering / load balancing concepts and technologies.
- Demonstrated ability to think strategically about developing solution strategies, and deliver results.
- Good understanding of cloud-native application design patterns and practices in AWS.
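As an illustration of the credential-management item above, a minimal Python sketch using boto3; it assumes AWS credentials are already configured, ignores pagination, and uses an arbitrary 90-day cutoff:

```python
# Minimal sketch: flag IAM access keys older than 90 days, a typical
# credential-hygiene check for the kind of account management described above.
from datetime import datetime, timezone, timedelta

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        if key["CreateDate"] < cutoff:
            print(f"{user['UserName']}: key {key['AccessKeyId']} is older than 90 days")
```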
Day-to-Day requirements
- Work with the developer team to enhance the existing CI/CD pipeline.
- Adopt industry best practices to set up a UAT and prod environment for scalability.
- Manage the AWS resources including IAM users, access control, billing etc.
- Work with the test automation engineer to establish a CI/CD pipeline.
- Make replication of environments easy to implement.
- Enable efficient software deployment.
- Proficient in Java, Node or Python
- Experience with NewRelic, Splunk, SignalFx, DataDog etc.
- Monitoring and alerting experience
- Full stack development experience
- Hands-on with building and deploying micro services in Cloud (AWS/Azure)
- Experience with Terraform for Infrastructure as Code
- Should have experience troubleshooting live production systems using monitoring/log analytics tools
- Should have experience leading a team (2 or more engineers)
- Experienced using Jenkins or similar deployment pipeline tools
- Understanding of distributed architectures
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them, using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Mumbai, in Lower Parel, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real-time and at a fraction of the current cost.
A few recognitions:
- Recognized as one of the Top 25 startups in India to work with in 2019 by LinkedIn
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems (see the sketch after this list)
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment, minimizing downtime and turnaround time
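As an illustration of the streaming and automation items above, a minimal Python sketch assuming the `kafka-python` package; the broker address, topic, and consumer group are placeholders:

```python
# Minimal sketch: a small consumer that watches a streaming-pipeline topic and
# prints each message, the kind of quick automation used to spot-check a pipeline.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "extraction-events",                        # hypothetical topic
    bootstrap_servers=["kafka.internal:9092"],  # hypothetical broker
    group_id="ops-monitor",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    print(f"partition={message.partition} offset={message.offset} event={message.value}")
```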
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing the IAAS and PAAS components on popular public Cloud Service Providers like AWS, Azure, GCP etc.
- Proficiency in Unix Operating systems and comfortable with Networking concepts
- Experience with developing/deploying a scalable system
- Experience with the Distributed Database & Message Queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.)
- Experience in managing Hadoop clusters
- Understanding of containers and experience managing them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology
DevOps Consultant!! MERN Stack Project Manager – Systems (Enterprise or Solutions) Architect needed!
Hello superstar,
I appreciate you taking the time to read this. I have posted a job for developers to work on a start-up; the link is ......
I would need someone with DevOps experience, to ensure that the project is undertaken with the highest standards possible. I have had many experiences where ‘completed’ software after years of development was filled with bugs and it would be more cost-effective to start from scratch than to attempt to find and correct all the bugs.
I have attempted to learn as much as possible, but I now have an opportunity, and it would better serve the venture to have someone handle the management of the project to ensure that:
- We choose the most appropriate technology
- We choose competent developers in those technologies
- The architecture and data modeling are clearly defined in a ‘blueprint’ plan
- A DevOps environment and processes are set up and the developers understand what is required
- Proper tests are carried out to ensure everything works as intended
- There are processes for testers to follow and competent testers are selected to follow them
- Accessibility, localization, and internationalization are planned ahead of time
- Security, scalability, and other future probabilities that I may not even be aware of are considered and planned ahead of time
- Documentation and code reviews, refactoring and other quality assurance processes are undertaken
- Working software is produced and systems that enable new developers or teams of people to easily take over and/or contribute new modules or updates in a controlled and organized fashion
- Cost estimates, budgets/projections, and the use of SaaS, hosting, and other 3rd-party services and applications
I am more concerned with a professional, world-class organizational system than with any particular type of software being produced, as a strong foundation will enable anything to be created with efficacy and precision.
Again, thank you for reading this, please reply with the word “superstar” anywhere in the second line of your response. I look forward to hearing from you.
Warm wishes DevOps Evangelist,

PRAXINFO is hiring a DevOps Engineer.
Position : DevOps Engineer
Job Location : C.G.Road, Ahmedabad
EXP : 1-3 Years
Salary : 40K - 50K
Required skills:
⦿ Good understanding of cloud infrastructure (AWS, GCP, etc.)
⦿ Hands-on experience with Docker, Kubernetes, or ECS
⦿ Ideally a strong Linux background (RHCSA, RHCE)
⦿ Good understanding of monitoring systems (Nagios, etc.) and logging solutions (Elasticsearch, etc.)
⦿ Microservice architectures
⦿ Experience with distributed systems and highly scalable systems
⦿ Demonstrated history of automating operations processes via services and tools (Puppet, Ansible, etc.)
⦿ Systematic problem-solving approach coupled with a strong sense of ownership and drive.
If anyone is interested, share your resume at hiring at praxinfo dot com!
#linux #devops #engineer #kubernetes #docker #containerization #python #shellscripting #git #jenkins #maven #ant #aws #RHCE #puppet #ansible

