50+ DevOps Jobs in Bangalore (Bengaluru) | DevOps Job openings in Bangalore (Bengaluru)
Apply to 50+ DevOps Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest DevOps Job opportunities across top companies like Google, Amazon & Adobe.
Client based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems) in real-world applications; a minimal illustration follows this list.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
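For illustration only, here is a minimal Python sketch of the kind of algorithm implementation referred to above, using scikit-learn and its bundled Iris dataset as stand-ins; the library and dataset are assumptions, not part of the role's actual stack.

```python
# A minimal, illustrative classification example in Python using scikit-learn.
# The dataset (Iris) and model choice are placeholders, not part of the role.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```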
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage but not mandatory.
We are currently seeking skilled and motivated Senior Java Developers to join our dynamic and innovative development team. As a Senior Java Developer, you will be responsible for designing, developing, and maintaining high-performance, scalable Java applications.
Join DataCaliper and step into the vanguard of technological advancement, where your proficiency will shape the landscape of data management and drive businesses toward unparalleled success.
Please find our job description below; if interested, apply or reply with your profile so we can connect and discuss.
Company: DataCaliper
Work location: Coimbatore
Experience: 3+ years
Joining time: Immediate – 4 weeks
Required skills:
-Good experience in Java/J2EE programming frameworks like Spring (Spring MVC, Spring Security, Spring JPA, Spring Boot, Spring Batch, Spring AOP).
-Deep knowledge in developing enterprise web applications using Java Spring
-Good experience in REST web services.
-Understanding of DevOps processes like CI/CD
-Exposure to Maven, Jenkins, Git, data formats (JSON/XML), Quartz, log4j, logback
-Good experience in database technologies such as SQL/PLSQL, or equivalent database experience
-The candidate should have excellent communication skills with an ability to interact with non-technical stakeholders as well.
Thank you
The client excels in providing top-notch business solutions to industries such as E-commerce, Marketing, Banking and Finance, Insurance, Transport and many more. For a generation driven by data, insights, and decision making, we help businesses make the best possible use of data and thrive in this competitive space. Our expertise spans Data, Analytics, and Engineering, to name a few.
We are looking for an experienced DevOps Engineer to enhance our cloud infrastructure and optimize application performance.
- Bachelor’s degree in Computer Science or related field.
- 5+ years of DevOps experience with strong scripting skills (shell, Python, Ruby).
- Familiarity with open-source technologies and application development methodologies.
- Experience in optimizing both stand-alone and distributed systems.
Key Responsibilities:
- Design and maintain DevOps practices for seamless application deployment.
- Utilize AWS services (EBS, S3, EC2) and automation technologies (Ansible, Terraform); a short AWS automation sketch follows this list.
- Manage Docker containers and Kubernetes environments.
- Implement CI/CD pipelines with tools like Jenkins and GitLab.
- Use monitoring tools (Datadog, Prometheus) for system reliability.
- Collaborate effectively across teams and articulate technical choices.
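By way of illustration, a minimal Python sketch of the AWS automation mentioned above, assuming boto3 is installed and credentials are already configured; the region is an assumption.

```python
# Hedged sketch: report running EC2 instances and list S3 buckets with boto3.
# Assumes AWS credentials are configured; the region is an assumption.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
s3 = boto3.client("s3")

running = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in running["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```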
at Connect IO
Red Hat OpenShift (L2/L3 Expertise)
1. Setup OpenShift Ingress Controller (And Deploy Multiple Ingress)
2. Setup OpenShift Image Registry
3. Very good knowledge of the OpenShift Management Console to help application teams manage and troubleshoot their pods.
4. Expertise in deploying artifacts to an OpenShift cluster and configuring customized scaling capabilities.
5. Knowledge of pod logging in an OpenShift cluster for troubleshooting (a short sketch follows).
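As a rough illustration of the pod troubleshooting in item 5, here is a sketch using the official Kubernetes Python client, which also works against OpenShift clusters; the namespace is a hypothetical placeholder and a valid kubeconfig is assumed.

```python
# Sketch: list pods in a namespace and pull recent logs for unhealthy ones.
# Assumes a kubeconfig with access to the cluster; the namespace is made up.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()
namespace = "demo-apps"  # hypothetical namespace

for pod in v1.list_namespaced_pod(namespace).items:
    print(pod.metadata.name, pod.status.phase)
    if pod.status.phase != "Running":
        logs = v1.read_namespaced_pod_log(pod.metadata.name, namespace, tail_lines=50)
        print(logs)
```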
2. Architect:
- Suggestions on architecture setup
- Validate the architecture and advise on pros, cons, and feasibility.
- Managing a multi-location sharded architecture
- Multi-region sharding setup
3. Application DBA:
- Validate and help with Sharding decisions at collection level
- Providing deep analysis on performance by looking at execution plans
- Index Suggestions
- Archival Suggestions and Options
4. Collaboration
Ability to plan and delegate work by providing specific instructions.
at Scoutflo
Scoutflo is a platform that automates complex Kubernetes infrastructure requirements.
Job Description:
- In-depth knowledge of full-stack development principles and best practices.
- Expertise in building web applications, with strong proficiency in technologies like Node.js, React, and Go.
- Experience developing and consuming RESTful & gRPC API Protocols.
- Familiarity with CI/CD workflows and DevOps processes.
- Solid understanding of cloud platforms and container orchestration technologies.
- Experience with Kubernetes pipelines and workflows using tools like Argo CD.
- Experience with designing and building user-friendly interfaces.
- Excellent understanding of distributed systems, databases, and APIs.
- A passion for writing clean, maintainable, and well-documented code.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent communication and interpersonal skills.
- Experience with building self-serve platforms or user onboarding experiences.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform.
- A strong understanding of security best practices for Kubernetes deployments.
- Grasp on setting up Network Architecture for distributed systems.
Must have:
1) Experience with managing infrastructure on AWS, GCP, or Azure
2) Experience managing infrastructure on Kubernetes
Job Title: DevOps + Java Engineer
Location: Bangalore
Mode of work- Hybrid (3 days work from office)
Job Summary: We are looking for a skilled Java + DevOps Engineer to help enhance and maintain our infrastructure and applications. The ideal candidate will have a strong background in Java development combined with expertise in DevOps practices, ensuring seamless integration and deployment of software solutions. You will collaborate with cross-functional teams to design, develop, and deploy robust and scalable solutions.
Key Responsibilities:
- Develop and maintain Java-based applications and microservices.
- Implement CI/CD pipelines to automate the deployment process.
- Design and deploy monitoring, logging, and alerting systems.
- Manage cloud infrastructure using tools such as AWS, Azure, or GCP.
- Ensure security best practices are followed throughout all stages of development and deployment.
- Troubleshoot and resolve issues in development, test, and production environments.
- Collaborate with software engineers, QA analysts, and product teams to deliver high-quality solutions.
- Stay current with industry trends and best practices in Java development and DevOps.
Required Skills and Experience:
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Proficient in Java programming language and frameworks (Spring, Hibernate, etc.).
- Strong understanding of DevOps principles and experience with DevOps tools (e.g., Jenkins, Git, Docker, Kubernetes).
- Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
- Familiarity with monitoring and logging tools (ELK stack, Prometheus, Grafana).
- Solid understanding of CI/CD pipelines and automated testing frameworks.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Responsibilities:
- Design, develop, and implement robust and efficient backend services using microservices architecture principles.
- Write clean, maintainable, and well-documented code using C# and the .NET framework.
- Develop and implement data access layers using Entity Framework.
- Utilize Azure DevOps for version control, continuous integration, and continuous delivery (CI/CD) pipelines.
- Design and manage databases on Azure SQL.
- Perform code reviews and participate in pair programming to ensure code quality.
- Troubleshoot and debug complex backend issues.
- Optimize backend performance and scalability to ensure a smooth user experience.
- Stay up-to-date with the latest advancements in backend technologies and cloud platforms.
- Collaborate effectively with frontend developers, product managers, and other stakeholders.
- Clearly communicate technical concepts to both technical and non-technical audiences.
Qualifications:
- Strong understanding of microservices architecture principles and best practices.
- In-depth knowledge of C# programming language and the .NET framework (ASP.NET MVC/Core, Web API).
- Experience working with Entity Framework for data access.
- Proficiency with Azure DevOps for CI/CD pipelines and version control (Git).
- Experience with Azure SQL for database design and management.
- Experience with unit testing and integration testing methodologies.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong written and verbal communication skills.
- A passion for building high-quality, scalable, and secure software applications.
at CodeCraft Technologies Private Limited
Position: SRE/ DevOps
Experience: 6-10 Years
Location: Bengaluru/Mangalore
CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web and cloud platforms.
We are seeking a highly skilled and motivated Site Reliability Engineer (SRE) to join our dynamic team. As an SRE, you will play a crucial role in ensuring the reliability, availability, and performance of our systems and applications. You will work closely with the development team to build and maintain scalable infrastructure, implement best practices in CI/CD, and contribute to the overall stability of our technology stack.
Roles and Responsibilities:
· CI/CD and DevOps:
o Implement and maintain robust Continuous Integration/Continuous Deployment (CI/CD) pipelines to ensure efficient and reliable software delivery.
o Collaborate with development teams to integrate DevOps principles into the software development lifecycle.
o Experience with pipelines such as GitHub Actions, GitLab, Azure DevOps, or CircleCI is a plus.
· Test Automation:
o Develop and maintain automated testing frameworks to validate system functionality, performance, and reliability.
o Collaborate with QA teams to enhance test coverage and improve overall testing efficiency.
· Logging/Monitoring:
o Design, implement, and manage logging and monitoring solutions to proactively identify and address potential issues.
o Respond to incidents and alerts to ensure system uptime and performance.
· Infrastructure as Code (IaC):
o Utilize Terraform (or other tools) to define and manage infrastructure as code, ensuring scalability, security, and consistency across environments.
· Elastic Stack:
o Implement and manage Elastic Stack (ELK) for log and data analysis to gain insights into system performance and troubleshoot issues effectively.
· Cloud Platforms:
o Work with cloud platforms such as AWS, GCP, and Azure to deploy and manage scalable and resilient infrastructure.
o Optimize cloud resources for cost efficiency and performance.
· Vulnerability Management:
o Conduct regular vulnerability assessments and implement measures to address and remediate identified vulnerabilities.
o Collaborate with security teams to ensure a robust security posture.
· Security Assessment:
o Perform security assessments and audits to identify and address potential security risks.
o Implement security best practices and stay current with industry trends and emerging threats.
o Experience with tools such as GCP Security Command Center, and AWS Security Hub is a plus.
· Third-Party Hardware Providers:
o Collaborate with third-party hardware providers to integrate and support hardware components within the infrastructure.
Desired Profile:
· The candidate should be willing to work in the EST time zone, i.e. from 6 PM to 2 AM.
· Excellent communication and interpersonal skills
· Bachelor’s Degree
· Certifications related to this field shall be an added advantage.
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle
- We have both cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps, and CI/CD
- Deep experience in: GitLab, GitOps, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially with GPU processing. This doesn’t need to include model training, data gathering, etc.; we’re looking more for experience with model deployment and inference tasks at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and SQLAlchemy (we have MongoDB and PostgreSQL databases on our backend); a short sketch follows this list
- Expertise in Docker-based virtualization including - creating & maintaining custom Docker images, deployment of Docker images on cloud and on-premise services, monitoring of production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, availability
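For context, a minimal sketch of the pymongo/SQLAlchemy integration called out above; the connection URLs, database, collection, and table names are hypothetical placeholders.

```python
# Sketch of MongoDB (pymongo) and PostgreSQL (SQLAlchemy) access from Python.
# Connection strings, database/collection, and table names are placeholders.
import os

from pymongo import MongoClient
from sqlalchemy import create_engine, text

mongo = MongoClient(os.environ["MONGO_URL"])
jobs = mongo["sizzle"]["jobs"]  # hypothetical database and collection
jobs.insert_one({"clip_id": "abc123", "status": "queued"})

engine = create_engine(os.environ["POSTGRES_URL"])
with engine.connect() as conn:
    for row in conn.execute(text("SELECT id, status FROM clips LIMIT 5")):  # hypothetical table
        print(row.id, row.status)
```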
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g. OpenCV, Pillow, etc.)
- Experience with PostgreSQL and MongoDB (Or SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or Master's degree in Computer Science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux/Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years
Opportunity to work on Product Development
The Technical Project Manager is responsible for managing projects to make sure the proposed plan adheres to the timeline, budget, and scope. Their duties include planning projects in detail, setting schedules for all stakeholders, and executing each step of the project for our proprietary product, with some of the World’s biggest brands across the BFSI domain. The role is cross-functional and requires the individual to own and push through projects that touch upon business, operations, technology, marketing, and client experience.
• 5-7 years of experience in technical project management.
• Professional Project Management Certification from an accredited institution is mandatory.
• Proven experience overseeing all elements of the project/product lifecycle.
• Working knowledge of Agile and Waterfall methodologies.
• Prior experience in Fintech, Blockchain, and/or BFSI domain will be an added advantage.
• Demonstrated understanding of Project Management processes, strategies, and methods.
• Strong sense of personal accountability regarding decision-making and supervising department team.
• Collaborate with cross-functional teams and stakeholders to define project requirements and scope.
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into a microservices-based architecture.
- Implement a caching layer wherever possible.
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
Skills Required:
- At least 2 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases such as Neo4j, and with graph query technologies such as GraphQL.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with Web sockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based app.
Job Description
Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore
Please note: This position is focused on development rather than migration. Experience in Nifi or Tibco is mandatory.
Mandatory Skills: ETL, DevOps platform, Nifi or Tibco
We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.
Responsibilities:
• Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques.
• Develop and maintain data-oriented scripting using languages such as Python.
• Create and manage data structures to ensure efficient and accurate data storage and retrieval.
• Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
• Utilize ETL tools such as Nifi and Tibco to extract, transform, and load data into various systems.
• Have hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
• Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
• Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
• Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.
Requirements:
• A minimum of 6 years of relevant experience as a Data Engineer.
• Proficiency in ETL, SQL, and other advanced data engineering techniques.
• Strong programming skills in scripting languages such as Python.
• Experience in creating and maintaining data structures for efficient data storage and retrieval.
• Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
• Hands-on experience with ETL tools, particularly Nifi and Tibco.
• In-depth knowledge of database structures, including MSSQL and Vertica.
• Proven experience in managing and operating data platforms.
• Strong problem-solving and analytical skills with the ability to handle complex data challenges.
• Excellent communication and collaboration skills to work effectively in a team environment.
• Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.
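To make the ETL requirement concrete, here is a small illustrative Python sketch with pandas and SQLAlchemy; the CSV source, SQL Server connection string, and table name are hypothetical placeholders, and tools like Nifi or Tibco would normally orchestrate such steps.

```python
# Illustrative extract-transform-load step in Python (pandas + SQLAlchemy).
# The source file, connection string, and target table are placeholders.
import pandas as pd
from sqlalchemy import create_engine

# Extract
raw = pd.read_csv("orders_export.csv")

# Transform: normalise column names and drop incomplete rows
raw.columns = [c.strip().lower() for c in raw.columns]
clean = raw.dropna(subset=["order_id", "amount"])
clean["amount"] = clean["amount"].astype(float)

# Load into the target database (MSSQL shown only as an example)
engine = create_engine("mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server")
clean.to_sql("orders_clean", engine, if_exists="append", index=False)
```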
About The Company
The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.
Join us as a Senior Software Engineer within our Web Application Development team, based out of Pune to deliver end-to-end customized application development.
We expect you to participate in and contribute to every stage of the project, right from interacting with internal customers/stakeholders, understanding their requirements, and proposing solutions that best fit their expectations. You will be part of a local team with the chance to contribute to global project delivery, with the possibility of working on-site (Belgium) if required. You will be a key member of a highly motivated application development team leading the Microsoft technology stack, enabling team members to deliver applications "first time right".
Principal Duties and Responsibilities
• You will be responsible for the technical analysis of requirements and lead the project from Technical perspective
• You should be a problem solver and provide scalable and efficient technical solutions
• You guarantee excellent and scalable application development within an estimated timeline
• You will interact with the customers/stakeholders and understand their requirements and propose the solutions
• You will work closely with the ‘Application Owner’ and carry the entire responsibility of end-to-end processes/development
• You will produce technical & functional application documentation and release notes that facilitate the aftercare of the application
Knowledge, Skills and Qualifications
• Education: Master’s degree in computer science or equivalent
• Experience: Minimum 5-10 years
Required Skills
• Strong working knowledge of C#, Angular 2+, SQL Server, ASP.Net Web API
• Good understanding of OOP, SOLID principles, and development practices
• Good understanding of DevOps, Git, CI/CD
• Experience with development of client and server-side applications
• Excellent English communication skills (written, oral), with good listening capabilities
• Exceptionally good technical, analytical, debugging, and problem-solving skills
• Has a reasonable balance between getting the job done vs technical debt
• Enjoys producing top quality code in a fast-moving environment
• Effective team player; willingness to put the needs of the team over their own
Preferred Skills
• Experience with product development for the Microsoft Azure platform
• Experience with product development life cycle would be a plus
• Experience with agile development methodology (Scrum)
• Functional analysis skills and experience (Use cases, UML) is an asset
About Apexon:
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. For over 17 years, Apexon has been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving our clients’ toughest technology problems, and a commitment to continuous improvement. We focus on three broad areas of digital services: User Experience (UI/UX, Commerce); Engineering (QE/Automation, Cloud, Product/Platform); and Data (Foundation, Analytics, and AI/ML), and have deep expertise in BFSI, healthcare, and life sciences.
Apexon is backed by Goldman Sachs Asset Management and Everstone Capital.
To know more about us please visit: https://www.apexon.com/
Responsibilities:
- We are looking for a C# Automation Engineer with 4-6 years of experience to join our engineering team and help us develop and maintain various software/utility products.
- Good object-oriented programming concepts and practical knowledge.
- Strong programming skills in C# are required.
- Good knowledge of C# Automation is preferred.
- Good to have experience with the Robot framework.
- Must have knowledge of APIs (REST) and databases (SQL), with the ability to write efficient queries.
- Good to have knowledge of Azure cloud.
- Take end-to-end ownership of test automation development, execution and delivery.
Good to have:
- Experience in tools like SharePoint, Azure DevOps.
Other skills:
- Strong analytical & logical thinking skills. Ability to think and act rationally when faced with challenges.
Job Purpose :
Working with the Tech Data Sales Team, the Presales Consultant is responsible for providing presales technical support to the Sales team and presenting tailored demonstrations or qualification discussions to customers and/or prospects. The Presales Consultant also assists the Sales Team with qualifying opportunities in or out and helping expand existing opportunities through solid questioning. The Presales Consultant will be responsible for conducting Technical Proof of Concept, Demonstration & Presentation on the supported products & solutions.
Responsibilities :
- Subject Matter Expert (SME) in the development of Microsoft Cloud Solutions (Compute, Storage, Containers, Automation, DevOps, Web applications, Power Apps etc.)
- Collaborate and align with business leads to understand their business requirement and growth initiatives to propose the required solutions for Cloud and Hybrid Cloud
- Work with other technology vendors, ISVs to build solutions use cases in the Center of Excellence based on sales demand (opportunities, emerging trends)
- Manage the APJ COE environment and Click-to-Run Solutions
- Provide solution proposal and pre-sales technical support for sales opportunities by identifying the requirements and design Hybrid Cloud solutions
- Create Solutions Play and blueprint to effectively explain and articulate solution use cases to internal TD Sales, Pre-sales and partners community
- Support in-country (APJ countries) Presales Team for any technical related enquiries
- Support Country's Product / Channel Sales Team in prospecting new opportunities in Cloud & Hybrid Cloud
- Provide technical and sales trainings to TD sales, pre-sales and partners.
- Lead & Conduct solution presentations and demonstrations
- Deliver presentations at Tech Data, Partner or Vendor led solutions events.
- Achieve relevant product certifications
- Conduct customer workshops that help accelerate sales opportunities
Knowledge, Skills and Experience :
- Bachelor's degree in Information Technology/Computer Science or equivalent experience; certifications preferred
- Minimum of 7 years of relevant working experience, ideally in an IT multinational environment
- Track record on the assigned line cards experience is an added advantage
- IT Distributor and/or SI experience would also be an added advantage
- Good communication and problem-solving skills
- Proven ability to work independently, effectively in an off-site environment and under high pressure
What's In It For You?
- Elective Benefits: Our programs are tailored to your country to best accommodate your lifestyle.
- Grow Your Career: Accelerate your path to success (and keep up with the future) with formal programs on leadership and professional development, and many more on-demand courses.
- Elevate Your Personal Well-Being: Boost your financial, physical, and mental well-being through seminars, events, and our global Life Empowerment Assistance Program.
- Diversity, Equity & Inclusion: It's not just a phrase to us; valuing every voice is how we succeed. Join us in celebrating our global diversity through inclusive education, meaningful peer-to-peer conversations, and equitable growth and development opportunities.
- Make the Most of our Global Organization: Network with other new co-workers within your first 30 days through our onboarding program.
- Connect with Your Community: Participate in internal, peer-led inclusive communities and activities, including business resource groups, local volunteering events, and more environmental and social initiatives.
Don't meet every single requirement? Apply anyway.
At Tech Data, a TD SYNNEX Company, we're proud to be recognized as a great place to work and a leader in the promotion and practice of diversity, equity and inclusion. If you're excited about working for our company and believe you're a good fit for this role, we encourage you to apply. You may be exactly the person we're looking for!
The Key Responsibilities Include But Are Not Limited To:
Help identify and drive speed, performance, scalability, and reliability optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment, creating, maintaining, monitoring, and automating the overall solution deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments
Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process with a heavy focus on service monitoring and site reliability engineering work (a small monitoring-script sketch follows this list).
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/SCRUM enterprise-scale software development including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technology (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more of the cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science or equivalent related field experience.
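As a small, hedged example of the service-monitoring scripting this role focuses on, the sketch below polls a health endpoint and logs failures; the URL, interval, and alerting hook are assumptions rather than details from the posting.

```python
# Sketch: poll a service health endpoint and log failures for follow-up.
# The endpoint URL and 60-second interval are assumptions.
import logging
import time

import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
HEALTH_URL = "https://example.internal/service/health"  # hypothetical endpoint


def check_once(timeout_seconds: float = 5.0) -> bool:
    try:
        response = requests.get(HEALTH_URL, timeout=timeout_seconds)
        response.raise_for_status()
        return True
    except requests.RequestException as exc:
        logging.error("Health check failed: %s", exc)
        return False


if __name__ == "__main__":
    while True:
        if not check_once():
            logging.warning("Service unhealthy; an alerting hook would fire here")
        time.sleep(60)
```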
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building & running systems to drive high availability, performance and operational improvements
Excellent written & oral communication skills; to ask pertinent questions, and to assess/aggregate/report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.
Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to balance multi-task and multiple priorities while maintaining a high level of customer satisfaction is key.
5. Be able to work in an interrupt-driven environment.
Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
Leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS or equivalent experience in programming on enterprise or department servers or systems.
at CodeCraft Technologies Private Limited
Roles and Responsibilities:
• Gather and analyse cloud infrastructure requirements
• Automating system tasks and infrastructure using a scripting language (Shell/Python/Ruby preferred), with configuration management tools (Ansible/Puppet/Chef), service registry and discovery tools (Consul, Vault, etc.), infrastructure orchestration tools (Terraform, CloudFormation), and automated imaging tools (Packer)
• Support existing infrastructure, analyse problem areas and come up with solutions
• An eye for monitoring – the candidate should be able to look at complex infrastructure and figure out what to monitor and how.
• Work along with the Engineering team to help out with Infrastructure / Network automation needs.
• Deploy infrastructure as code and automate as much as possible (a small automation sketch follows this list)
• Manage a team of DevOps engineers
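As a hedged illustration of the infrastructure-as-code automation above, the snippet below drives Terraform from Python via subprocess; the working directory and workflow are assumptions, and in practice tools like Ansible or Packer would slot in similarly.

```python
# Sketch: drive a Terraform plan/apply cycle from Python using subprocess.
# The working directory and auto-apply workflow are assumptions.
import subprocess


def run(cmd, cwd="infra/"):  # "infra/" is a hypothetical Terraform directory
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


run(["terraform", "init", "-input=false"])
run(["terraform", "plan", "-out=tfplan", "-input=false"])
run(["terraform", "apply", "-input=false", "tfplan"])
```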
Desired Profile:
• Understanding of provisioning of Bare Metal and Virtual Machines
• Working knowledge of Configuration management tools like Ansible/ Chef/ Puppet, Redfish.
• Experience in scripting languages like Ruby/ Python/ Shell Scripting
• Working knowledge of IP networking, VPNs, DNS, load balancing, firewalling & IPS concepts
• Strong Linux/Unix administration skills.
• Self-starter who can implement with minimal guidance
• Hands-on experience setting up CI/CD from scratch in Jenkins
• Experience with managing K8s infrastructure
- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby
- Recommend a migration and consolidation strategy for DevOps tools
- Design and implement an Agile work management approach
- Make a quality strategy
- Design a secure development process
- Create a tool integration strategy
What is the role?
You will be expected to manage the product plan, engineering, and delivery of Xoxoday Plum. Plum is a rewards and incentives infrastructure for businesses: a unified, integrated suite of products to handle various rewarding use cases for consumers, sales, channel partners, and employees. 31% of the total tech team is aligned to this product, comprising 32 members across Plum Tech, Quality, Design, and Product Management. The annual FY 2019-20 revenue for Plum was $40MN, and it is showing high growth potential this year as well. The product has a good mix of both domestic and international clientele and is expanding. The role will be based out of our head office in Bangalore, Karnataka; however, we are open to discussing the option of remote working, with 25-50% travel.
Key Responsibilities
- Scope and lead technology with the right product and business metrics.
- Directly contribute to product development by writing code if required.
- Architect systems for scale and stability.
- Serve as a role model for our high engineering standards and bring consistency to the many codebases and processes you will encounter.
- Collaborate with stakeholders across disciplines like sales, customers, product, design, and customer success.
- Code reviews and feedback.
- Build simple solutions and designs over complex ones, and have a good intuition for what is lasting and scalable.
- Define a process for maintaining a healthy engineering culture ( Cadence for one-on-ones, meeting structures, HLDs, Best Practices In development, etc).
What are we looking for?
- Manage a senior tech team of more than 5 direct and 25 indirect developers.
- Should have experience in handling e-commerce applications at scale.
- Should have at least 7 years of experience in software development and agile processes for international e-commerce businesses.
- Should be an extremely hands-on, full-stack developer familiar with modern architecture.
- Should exhibit skills to build a good engineering team and culture.
- Should be able to handle the chaos with product planning, prioritizing, customer-first approach.
- Technical proficiency
- JavaScript, SQL, NoSQL, PHP
- Frameworks like React, ReactNative, Node.js, GraphQL
- Database technologies like ElasticSearch, Redis, MySQL, Cassandra, MongoDB, Kafka
- DevOps to manage and architect infra - AWS, CI/CD (Jenkins)
- System Architecture w.r.t Microservices, Cloud Development, DB Administration, Data Modeling
- Understanding of security principles and possible attacks, and how to mitigate them.
Whom will you work with?
You will lead the Plum Engineering team and work in close conjunction with the Tech leads of Plum with some cross-functional stake with other products. You'll report to the co-founder directly.
What can you look for?
A wholesome opportunity in a fast-paced environment with scale, international flavour, backend, and frontend. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
Focussed on delivering scalable performant database platforms that underpin our customer data services in a dynamic and fast-moving agile engineering environment.
· Experience with different types of enterprise application databases (PostgreSQL a must)
· Familiar with developing in a Cloud environment (AWS RDS, DMS & DevOps highly desirable).
· Proficient in using SQL to interrogate, analyze and report on customer data and interactions on live systems and in testing environments (a short query sketch follows this list).
· Proficient in using PostgreSQL PL/pgSQL
· Experienced in delivering deployments and infrastructure as code with automation tools such as Jenkins, Terraform, Ansible, etc.
· Comfortable using code hosting platforms for version control and collaboration. (git, github, etc)
· Exposed to and have an opportunity to master automation and learn to use technologies and tools like Oracle, PostgreSQL, AWS, Terraform, GitHub, Nexus, Jenkins, Packer, Bash Scripting, Python, Groovy, and Ansible
· Comfortable leading complex investigations into service failures and data abnormalities that touch your applications.
· Experience with Batch and ETL methodologies.
· Confident in making technical decisions and acting on them (within reason) when under pressure.
· Calm dealing with stakeholders and easily be able to translate complex technical scenarios to non-technical individuals.
· Managing incidents, problems, and change in line with best practice
· Expected to lead and inspire others in your team and department, drive engineering best practice and compliance, strategic direction, and encourage collaboration and transparency.
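For illustration, a minimal Python sketch of interrogating customer data on PostgreSQL, as referenced above; the DSN, table, and column names are hypothetical placeholders, and psycopg2 is assumed to be available.

```python
# Sketch: a read-only report over customer interaction data in PostgreSQL.
# The DSN and the table/column names are hypothetical placeholders.
import psycopg2

dsn = "host=db.example.internal dbname=customers user=readonly password=secret"
query = """
    SELECT interaction_type, COUNT(*) AS total
    FROM customer_interactions
    WHERE created_at >= NOW() - INTERVAL '7 days'
    GROUP BY interaction_type
    ORDER BY total DESC;
"""

with psycopg2.connect(dsn) as conn:
    with conn.cursor() as cur:
        cur.execute(query)
        for interaction_type, total in cur.fetchall():
            print(f"{interaction_type}: {total}")
```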
at Merck Group
The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).
The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premise in Merck’s own data centers. Developing pipelines and applications on Foundry requires:
• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation (a minimal sketch follows this list)
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases and access layers (e.g. JDBC, MySQL, Microsoft SQL Server). Not all types required
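To illustrate the PySpark requirement, here is a minimal distributed-aggregation sketch; the input path, column names, and output location are hypothetical placeholders rather than actual Foundry datasets.

```python
# Minimal PySpark sketch: read events, aggregate per day and type, write out.
# Paths and column names are placeholders, not real Foundry datasets.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical source
daily_counts = (
    events.withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .count()
)
daily_counts.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")
```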
This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.
Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and unstructured – into Palantir Foundry
• Participate in end to end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst in developing requirements for Foundry pipelines
• Review code developed by other data engineers and check against platform-specific standards, cross-cutting concerns, coding and configuration standards and functional specification of the pipeline
• Document technical work in a professional and transparent way. Create high quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implementation of changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• DevOps project setup following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems. Debug problems across the full Foundry stack and code based on Python, PySpark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
at Altimetrik
Senior .NET Cloud (Azure) Practitioner
Job Description
Experience: 5-12 years (approx.)
Education: B-Tech/MCA
Mandatory Skills
- Strong RESTful API and microservices development experience using ASP.NET Core Web APIs (C#);
- Must have exceptionally good software design and programming skills on the .NET Core platform (.NET 3.x, .NET 6), C#, ASP.NET MVC, ASP.NET Web API (RESTful), Entity Framework & LINQ
- Good working knowledge on Azure Functions, Docker, and containers
- Expertise in Microsoft Azure Platform - Azure Functions, Application Gateway, API Management, Redis Cache, App Services, Azure Kubernetes, CosmosDB, Azure Search, Azure Service Bus, Function Apps, Azure Storage Accounts, Azure KeyVault, Azure Log Analytics, Azure Active Directory, Application Insights, Azure SQL Database, Azure IoT, Azure Event Hubs, Azure Data Factory, Virtual Networks and networking.
- Strong SQL Server expertise and familiarity with Azure Cosmos DB, Azure (Blob, Table, queue) storage, Azure SQL etc
- Experienced in Test-Driven Development, unit testing libraries, testing frameworks.
- Good knowledge of Object Oriented programming, including Design Patterns
- Cloud Architecture - Technical knowledge and implementation experience using common cloud architecture, enabling components, and deployment platforms.
- Excellent written and oral communication skills, along with the proven ability to work as a team with other disciplines outside of engineering are a must
- Solid analytical, problem-solving and troubleshooting skills
Desirable Skills:
- Certified Azure Solution Architect Expert
- Microsoft Certified: Azure Fundamentals (Exam AZ-900)
- Microsoft Certified: Azure Administrator Associate (Exam AZ-104)
- Microsoft Certified: Azure Developer Associate (Exam AZ-204)
- Microsoft Certified: DevOps Engineer Expert (AZ-400)
- Microsoft Certified: Azure Solutions Architect Expert (AZ-305)
- Good understanding of software architecture, scalability, resilience, performance;
- Working knowledge of automation tools such as Azure DevOps, Azure Pipeline or Jenkins or similar
Roles & Responsibilities
- Defining best practices & standards for usage of libraries, frameworks and other tools being used;
- Architecture, design, and implementation of software from development, delivery, and releases.
- Breakdown complex requirements into independent architectural components, modules, tasks and strategies and collaborate with peer leadership through the full software development lifecycle to deliver top quality, on time and within budget.
- Demonstrate excellent communications with stakeholders regarding delivery goals, objectives, deliverables, plans and status throughout the software development lifecycle.
- Should be able to work with various stakeholders (Architects/Product Owners/Leadership) as well as the team, as a Lead/Principal/Individual Contributor for Web UI/Front-End Development;
- Should be able to work in an agile, dynamic team environment;
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into a microservices-based architecture.
- Implement a caching layer wherever possible.
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
Skills Required:
- At least 3 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases such as Neo4j, and with graph query technologies such as GraphQL.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with Web sockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based app.
• Problem Solving: Resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with the use of a wide range of programming concepts and is also aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand & evaluate both obvious and subtle commercial risks, especially in relation to a programme.
Experience you would be expected to have
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: Experience preferred, but willingness to learn is essential.
• Big Data: Experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL)
• DevOps: Understand how to work in a DevOps and agile way / Versioning / Automation / Defect Management – Mandatory
• Agile methodology - knowledge of Jira
YOptima is a well capitalized digital startup pioneering full funnel marketing via programmatic media. YOptima is trusted by leading marketers and agencies in India and is expanding its footprint globally.
We are expanding our tech team and looking for a prolific Staff Engineer to lead our tech team (without necessarily being a people manager). Our tech is hosted on Google Cloud and the stack includes React, Node.js, Airflow, Python, Cloud SQL, BigQuery, TensorFlow.
If you have hands-on experience and passion for building and running scalable cloud-based platforms that change the lives of the customers globally and drive industry leadership, please read on.
- You have 6+ years of quality experience in building scalable digital products/platforms, with experience in full stack development, big data analytics, and DevOps.
- You are great at identifying risks and opportunities, and have the depth that comes with willingness and capability to be hands-on. Do you still code? Do you love to code? Do you love to roll up your sleeves and debug things?
- Do you enjoy going deep into that part of the 'full stack' that you are not an expert of?
Responsibilities:
- You will help build a platform that supports large scale data, with multi-tenancy and near real-time analytics.
- You will lead and mentor a team of data engineers and full stack engineers to build the next generation data-driven marketing platform and solutions.
- You will lead exploring and building new tech and solutions that solve business problems of today and tomorrow.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science or equivalent discipline.
- Excellent computer systems fundamentals, DS/Algorithms and problem solving skills.
- Experience in conceiving, designing, architecting, developing and operating full stack, data-driven platforms using Big data and cloud tech in GCP/AWS environments.
What you get: Opportunity to build a global company. Amazing learning experience. Transparent work culture. Meaningful equity in the business.
At YOptima, we value people who are driven by a higher sense of responsibility, bias for action, transparency, persistence with adaptability, curiosity and humility. We believe that successful people have more failures than average people have attempts. And that success needs the creative mindset to deal with ambiguities when you start, the courage to handle rejections and failure and rise up, and the persistence and humility to iterate and course correct.
- We look for people who are initiative driven, and not interruption driven. The ones who challenge the status quo with humility and candor.
- We believe startup managers and leaders are great individual contributors too, and that there is no place for context free leadership.
- We believe that the curiosity and persistence to learn new skills and nuances, and to apply the smartness in different contexts matter more than just academic knowledge.
Location:
- Brookefield, Bangalore
- Jui Nagar, Navi Mumbai
- Job Title - DevOps Engineer
- Reports Into - Lead DevOps Engineer
- Location - India
A Little Bit about Kwalee….
Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.
What’s In It For You?
- Hybrid working - 3 days in the office, 2 days remote/WFH is the norm
- Flexible working hours - we trust you to choose how and when you work best
- Profit sharing scheme - we win, you win
- Private medical cover - delivered through BUPA
- Life Assurance - for long term peace of mind
- On site gym - take care of yourself
- Relocation support - available
- Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars
- Pitch and make your own games on Creative Wednesdays! (https://www.kwalee.com/blog/inside-kwalee/what-are-creative-wednesdays/)
Are You Up To The Challenge?
As a DevOps Engineer you have a passion for automation, security and building reliable, expandable systems. You develop scripts and tools to automate deployment tasks, monitor critical aspects of the operation, and resolve engineering problems and incidents, and you collaborate with architects and developers to help create platforms for the future.
Your Team Mates
The DevOps team works closely with game developers and front-end and back-end server developers, making, updating and monitoring application stacks in the cloud. Each team member has specific responsibilities and their own projects to manage, and brings their own ideas to how the projects should work. Everyone strives for the most efficient, secure and automated delivery of application code and supporting infrastructure.
What Does The Job Actually Involve?
- Find ways to automate tasks and monitoring systems to continuously improve our systems (a minimal illustrative sketch follows this list).
- Develop scripts and tools to make our infrastructure resilient and efficient.
- Understand our applications and services and keep them running smoothly.
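For illustration only (not part of the posting), here is a minimal Python sketch of the kind of monitoring automation described above; the endpoint URLs and timeout are placeholder assumptions, not real services:

# Illustrative monitoring sketch: check a few service endpoints and report failures.
# Assumes `pip install requests`; the endpoint URLs below are placeholders.
import requests

ENDPOINTS = [
    "https://example.com/healthz",
    "https://example.com/api/status",
]

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a 2xx status within the timeout."""
    try:
        return requests.get(url, timeout=timeout).ok
    except requests.RequestException:
        return False

if __name__ == "__main__":
    for url in ENDPOINTS:
        status = "OK" if check(url) else "FAILED"
        print(f"{status}: {url}")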
Your Hard Skills
- Minimum 1 year of experience in a DevOps engineering role
- Deep experience with Linux and Unix systems
- Basic networking knowledge (named, nginx, etc.)
- Some coding experience (Python, Ruby, Perl, etc.)
- Experience with common automation tools (e.g., Chef, Terraform)
- AWS experience is a plus
- A creative mindset motivated by challenges and constantly striving for the best
Your Soft Skills
Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues.
We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.
A Little More About Kwalee
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.
Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.
We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France.
We have a truly global team making games for a global audience, and it's paying off: Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers' Association) and our games have been downloaded in every country on earth - including Antarctica!
Do you love leading a team of engineers, coding up new products, and making sure that they work well together? If so, this is the job for you.
As an Engineering Manager in Unscript, you'll be responsible for managing a team of engineers who are focused on developing new products. You'll be able to apply your strong engineering background as well as your experience with large-scale development projects in the past.
You'll also be able to act as Product Owner (we know it's not your job but you'll have to do this :) ) and make sure that the team is working towards the right goals.
Being the Engineering Manager at Unscript means owning up to all things—from technical issues to product decisions—and being comfortable with taking responsibility for everything from hiring and training new hires, to making sure you get the best out of every individual.
About Us:
UnScript uses AI to create videos that were never shot. Our technology saves brands thousands of dollars otherwise spent on hiring influencers/actors and shooting videos with them. UnScript was founded by distinguished alums from IIT, with exemplary backgrounds in business and technology. UnScript has raised two rounds of funding from global VCs, with Peter Thiel (co-founder, PayPal) and Reid Hoffman (co-founder, LinkedIn) as investors.
Required Qualifications:
- B.Tech or higher in Computer Science from a premier institute. (We are willing to waive this requirement if you are an exceptional programmer).
- Experience building scalable & performant web systems with a clear focus on reusable modules.
- You are comfortable in a fast-paced environment and can respond to urgent (and at times ambiguous) requests.
- Ability to translate fuzzy business problems into technical problems, come up with the design, estimates and plan, and execute and deliver the solution independently.
- Knowledge of AWS or other cloud infrastructure.
The Team:
Unscript was started by Ritwika Chowdhury. Our team brings experience from other foremost institutions like IIT Kharagpur, Microsoft Research, IIT Bombay, IIIT, BCG etc. We are thrilled to be backed by some of the world's largest VC firms and angel investors.
Job Description:
• Contribute to customer discussions when collecting requirements.
• Engage in internal and customer POCs to realize the potential solutions envisaged for the customers.
• Design/develop/migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and its integrations with applications and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects (an illustrative, hedged sketch follows below).
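As a rough, hedged sketch of the kind of REST-based automation scripting this role mentions (the posting itself calls for REST API and Python experience): the base URL, endpoint path, token handling and payload fields below are hypothetical placeholders, not documented vRA/vRO API routes, and would need to be replaced with the real API of your automation platform.

# Hypothetical automation sketch: request a deployment via a REST API.
# Assumes `pip install requests`; BASE_URL, the path and the payload are placeholders.
import os
import requests

BASE_URL = "https://automation.example.com"           # placeholder endpoint
TOKEN = os.environ.get("API_TOKEN", "changeme")        # placeholder auth token

def request_deployment(blueprint_name: str, project: str) -> dict:
    """Submit a deployment request and return the parsed JSON response."""
    response = requests.post(
        f"{BASE_URL}/api/deployments",                 # placeholder path
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"blueprint": blueprint_name, "project": project},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(request_deployment("web-server-blueprint", "demo-project"))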
Qualification:
• Maintain current, high-level technical knowledge of the entire VMware product portfolio and future product direction, along with in-depth knowledge of the relevant products.
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST APIs and/or Python programming; TypeScript/Node.js backend experience.
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet, Terraform
• End to end experience in Architecture, Design and Development of VMware Cloud Automation suite with good exposure to VMware products and/or Solutions.
• Hands-on experience in automation, coding, debugging and release.
• Sound process knowledge across requirement gathering, implementation, deployment, and support.
• Experience in working with global teams, customers and partners with solid communication skills.
• VMware CMA certification would be a plus
• An academic background of MS/BE/B.Tech in IT/CS/ECE/EE is preferred.
Relevant Experience: 3-7 Years
Location: Bangalore
Client: IBM
Exposure to Connect Direct on multiple operating systems
Job Scheduler
Analysis, Build and troubleshooting skills
Nice-to-have skills:
GitHub (any source control)
Codefresh (any DevOps tool). Exposure to cloud.
Client: Born Group
Contractual (Codersbrain payroll)
Hybrid
Location: Bangalore
Title: Cloud and Automation Engineer
- Budget: 6 L - 3 yrs
- Budget: 10 L - 5 yrs
JD: Position Responsibilities:
- Estimate user stories/features (story point estimation) and tasks in hours with the required level of accuracy and commit to them as part of Sprint Planning.
- Contribute to backlog grooming meetings by promptly asking relevant questions to ensure requirements achieve the right level of DOR.
- Raise any impediments/risks (technical/operational/personal) they come across and approach the Scrum Master/Technical Architect/PO.
- Create and maintain the product test strategy and document it.
- Create formal test plans and test reports and ensure they have the correct approvals.
- Coach and mentor test team members on the importance of testing.
- Responsible for test planning.
- Organize and facilitate the Test Readiness Review.
- Work with the product management team to create the approved user guide ready for the release.
- Provide test coverage reports and the percentage of automated test cases.
- Ensure a high-quality deliverable is passed on to the UAT phase for stakeholder testing.
- Provide the test evaluation summary report (test metrics) for the release.
- Estimate user stories/features (story point estimation) from their point of view and tasks in hours with the required level of accuracy and commit to them as part of Sprint Planning.
- Contribute to backlog refinement meetings by promptly asking relevant questions to ensure requirements achieve the right level of DOR.
- Work with the Product Owner to confirm that the acceptance tests reflect the desired functionality.
- Raise any impediments/risks (technical/operational/personal) they come across and approach the Scrum Master/Technical Architect/PO accordingly to arrive at a decision.
- Collaborate with other team members on various aspects of development, integration testing, etc. to ensure the feature being worked on is delivered with quality and on time.
- Test features developed by developers throughout sprints to ensure working software of high quality, as per the defined acceptance criteria, is released in line with committed team sprint objectives.
- Have a good understanding of the product features and customer expectations, and ensure all aspects of the functionality are tested before the product is tested in UAT.
- Plan how the features will be tested, manage the test environments, and be ready with the test data.
- Understand requirements and create automated test cases, thereby ensuring regression testing is performed on a daily basis; check code into the shared source code repository regularly without build errors.
- Ensure defects are reported accurately in Azure DevOps with the relevant details of severity/description etc., and work with the developers to ensure defects are verified and closed before the release.
- Update the status and the remaining effort for their tasks on a daily basis.
- Ensure change requests are treated correctly and tracked in the system, impact analysis is done, and risks/timelines are appropriately communicated.
Performance Testing Roles and Responsibilities
- Design, implement, and support performance testing systems and strategies.
- Design workload models.
- Execute performance tests.
- Use consistent metrics for monitoring.
- Identify bottlenecks and where they occur.
- Interpret results and graphs.
- Understand and describe the relationship between queues and sub-systems.
- Identify suggestions for performance tuning.
- Prepare the test report.
Employer will not sponsor applicants for employment visa status.
Basic Qualifications (Required Skills/Experience):
- Technical Bachelor's or Master's degree
- Azure Cloud fundamentals
- Programming languages: Java
- Development methodologies: Agile (BDD with JUnit and code repositories in Git, with feature-branch-based development and CI/CD)
- Architectural paradigms: Microservices
- Multi-deployment model support: container-based approach with Docker
- Relevant work experience in ETL/BI testing
- Ability to write ANSI SQL to compare data
- Knowledge of SQL Server Analysis Services
- Knowledge of Azure Data Lake
- Data comparison, verification and validation using Excel
- Relevant work experience in manual testing
- Technical background and an understanding of the aviation industry
- Should have worked with test management tools such as ALM (Application Lifecycle Management) or other equivalent test management tools
- Good documenting/scripting knowledge
- Excellent verbal and written communication skills
- Good understanding of SDLC and STLC
- Proven ability to manage and prioritize multiple, diverse projects simultaneously
- Must be flexible, independent and self-motivated, with punctual, regular and consistent attendance
- Automation testing tools like Selenium or QTP (Quick Test Professional), and experience with HP ALM or DevOps, ETL testing and SQL
- Experience in load testing, stress testing, stability testing and reliability testing
- Hands-on experience with performance testing tools like HP Performance Tester (LoadRunner) and WebLOAD
- Worked a minimum of 5 years in test automation
Preferred Qualifications (Desired Skills/Experience):
- BE/B.Tech/M.E/M.Tech/M.Sc/MCA degree in IT/CSE/ECE with 6 to 8 years of relevant IT software testing experience
- This position requires the person to work in the Flight domain; the experience below is preferred:
- Past experience related to the aeronautical data / aerospace / aviation domain.
- Past experience related to aircraft performance computation/optimization and tail-specific performance computations using big data analytics, ML and modeling.
- Past experience related to EFB applications, flight planning, data link, Flight Management Computer and airline operations.
- Good understanding of weather, air traffic constraints, ACARS, NOTAMs, routes, flight profile and flight progress, and demonstrated ability to lead technology projects and team management in one or more technology areas.
- Knowledge of the aviation industry is preferred.
Typical Education & Experience:
Education/experience typically acquired through advanced education (e.g. Bachelor's) and typically 5 or more years' related work experience, or an equivalent combination of education and experience (e.g. Master's + 4 years' related work experience).
Relocation:
This position does offer relocation within INDIA.
As a MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs, including automated testing frameworks and libraries such as pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store); a minimal experiment-tracking sketch follows this list
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
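Purely as an illustration of the experiment-tracking side of MLOps mentioned above (MLflow appears in the tech stack listed earlier), here is a minimal Python sketch; the tracking URI, experiment name, model and metric are placeholder assumptions, not details from the posting:

# Minimal, illustrative experiment-tracking sketch.
# Assumes `pip install mlflow scikit-learn`; tracking URI and experiment name are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")    # placeholder tracking server
mlflow.set_experiment("demo-experiment")             # placeholder experiment name

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 4}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    mlflow.log_params(params)                         # record hyperparameters
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")          # persist the trained model artifact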
full service & Product engineering USA based company
At UrbanPiper, we help everyone from the smallest of restaurants to the largest chains across the world grow their in-store and online commerce. Right from automating all their workflows across online delivery platforms such as Swiggy, Zomato, Deliveroo and UberEats, to building and deploying self-branded websites and apps, to managing their walk-in customers and dine-in experiences, UrbanPiper is the preferred choice for over 20,000 restaurants.
We are backed by top VCs, Tiger Global and Sequoia Capital. Recently, we closed our Series B round of funding with Swiggy and Zomato also participating together! That is a first and only (until now) event for any startup in the food and beverage industry.
The team:
The Platform Team is responsible for the core order processing and workflow automation products that UrbanPiper builds. The team owns a suite of services, a customer-facing application and data pipelines that enable our customers to take orders from multiple online and offline channels, process them seamlessly, and track them till completion.
Different internal and external applications and systems depend on services managed by the team to deliver their functionality to end users.
Your role:
As a Lead Software Engineer on the team, you will be responsible for the design, development, and maintenance of functional components in our core order processing and workflow automation products. You will be working with a team of backend and frontend engineers to build new features and improve existing ones.
You will:
● Take technical responsibility for a part of the product/module throughout the SDLC, from design to implementation and operation.
● Design, build, and maintain efficient, reusable, and reliable Python code while maintaining strict scalability requirements.
● Write unit tests and integration tests and ensure high-quality code delivery.
● Write high-quality documentation explaining the architecture and implementation of the components you work on.
● Identify bottlenecks and bugs with the help of our error management/APM solution, and devise fixes for these problems.
● Assist the SRE/DevOps team in setting up the production environment for new modules/systems as required.
● Participate in on-call shift rotations to assist the on-call SRE in identifying and resolving product issues.
● Review code written by other team members.
● Mentor and guide Associate and Software Engineer level team members.
We are looking for someone who has/is:
● 4-7 years of experience in Python web backend development.
● Ability to communicate clearly both verbally and in writing.
● Strong familiarity with frameworks like Django, Flask, etc. and those required to implement RESTful/GraphQL backends.
● Proficiency in SQL/NoSQL data modeling.
● Proficiency and experience designing and implementing clean and flexible REST API interfaces.
● Ability to re-architect existing systems to become more efficient and scalable based on industry best practices.
● Experience working with message queues such as RabbitMQ/Kafka or similar queuing-based systems.
● Experience with cloud services (AWS, Google Cloud Platform).
● Experience with CI/CD tools (Jenkins, CircleCI, etc.).
Good to have:
● Experience in a high-growth technology startup company.
● Experience managing a technical team.
● Familiarity with the concepts of distributed systems, their various failure modes and solutions to address them.
● Experience with column-oriented analytical databases such as ClickHouse, Redshift, etc.
Rapidly growing fintech SaaS firm that propels business grow
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the GIT repository, DevOps tools like Jenkins, UCD, Docker, Kubernetes, Jfrog Artifactory, Cloud monitoring tools, and Cloud security.
Key Responsibilities
- Set up, configure, and maintain GIT repos, Jenkins, UCD, etc. for multi-hosting cloud environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain automation scripts using Ansible or other available tools (a small illustrative sketch follows this list).
- Maintain and monitor cloud Kubernetes clusters and patch them when necessary.
- Work on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve a successful implementation of integrated solutions within the portfolio.
- Have the necessary technical and professional expertise.
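Purely by way of illustration of the AWS automation scripting described above (not from the posting itself), here is a small Python/boto3 sketch; the region and tag key are placeholder assumptions, and an Ansible playbook would be an equally valid choice:

# Illustrative only: list running EC2 instances missing a required tag.
# Assumes `pip install boto3` and AWS credentials configured; region and tag key are placeholders.
import boto3

REQUIRED_TAG = "environment"                        # placeholder tag key
ec2 = boto3.client("ec2", region_name="ap-south-1") # placeholder region

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print(f"{instance['InstanceId']} is missing the '{REQUIRED_TAG}' tag")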
What are we looking for?
- Minimum 5-12 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipeline.
- Experience in DevOps automation tools. Well versed with DevOps Frameworks, and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience with and a good understanding of any cloud such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with Jira is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being here.
We are
It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products - Plum, Empuls, and Compass. It works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, it has a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
The Cloudera Data Warehouse Hive team is looking for a passionate senior developer to join our growing engineering team. This group is targeting the biggest enterprises wanting to utilize Cloudera's services in private and public cloud environments. Our product is built on open source technologies like Hive, Impala, Hadoop, Kudu, Spark and so many more, providing unlimited learning opportunities.
A Day in the Life
Over the past 10+ years, Cloudera has experienced tremendous growth, making us the leading contributor to Big Data platforms and ecosystems and a leading provider of enterprise solutions based on Apache Hadoop. You will work with some of the best engineers in the industry who are tackling challenges that will continue to shape the Big Data revolution. We foster an engaging, supportive, and productive work environment where you can do your best work. The team culture values engineering excellence, technical depth, grassroots innovation, teamwork, and collaboration.
You will manage product development for our CDP components, and develop engineering tools and scalable services to enable efficient development, testing, and release operations. You will be immersed in many exciting, cutting-edge technologies and projects, including collaboration with developers, testers, product, field engineers, and our external partners, both software and hardware vendors.
Opportunity:
Cloudera is a leader in the fast-growing big data platforms market. This is a rare chance to make a name for yourself in the industry and in the Open Source world. The candidate will be responsible for Apache Hive and CDW projects. We are looking for a candidate who would like to work on these projects upstream and downstream. If you are curious about the project and code quality, you can check the Apache Hive project and its code, and you can start the development before you join. This is one of the beauties of the OSS world.
Responsibilities:
• Build robust and scalable data infrastructure software
• Design and create services and system architecture for your projects
• Improve code quality through writing unit tests, automation, and code reviews
• Write Java code and/or build several services in the Cloudera Data Warehouse
• Work with a team of engineers who review each other's code/designs and hold each other to an extremely high bar for the quality of code/designs
• Understand the basics of Kubernetes
• Build out the production and test infrastructure
• Develop automation frameworks to reproduce issues and prevent regressions
• Work closely with other developers providing services to our system
• Help to analyze and understand how customers use the product and improve it where necessary
Qualifications:
• Deep familiarity with the Java programming language
• Hands-on experience with distributed systems
• Knowledge of database concepts and RDBMS internals
• Knowledge of the Hadoop stack, containers, or Kubernetes is a strong plus
• Experience working in a distributed team
• 3+ years of experience in software development
- Produce clean code and automated tests
- Align with enterprise architecture frameworks and standards
- Be the role-model for all engineers in the team in terms of technical competency
- Research, assess and adopt new technologies as required
- Be a guide and mentor to the team members and help in ramping up the overall skill-base of the team.
- Produce detailed estimates and optimized work plans for requirements and changes
- Ensure that features are delivered on time and that they meet the business needs
- Strive for quality of performance, usability, reliability, maintainability, and extensibility
- Identify opportunities for process and tool improvements
- Use analytical rigor to produce effective solutions to poorly defined problems
- Follow the "Build to Ship" mantra in practice with a full DevOps implementation
- 10+ years of core software development and product creation experience in CPaaS.
- Working knowledge of VoIP, communication APIs, J2EE, JMS/Kafka, web services, Hadoop, React, Node.js, and Golang.
- Working knowledge of various CPaaS channels - SMS, voice, WhatsApp, RCS, email.
- Working knowledge of DevOps, automation testing, test-driven development, behavior-driven development, serverless and microservices
- Experience with AWS / Azure deployments
- Solid background in large-scale software development.
- Full-stack understanding of web/mobile/API/database development concepts and patterns
- Exposure to microservices, IaaS, PaaS, service mesh, SaaS and cloud-native application development.
- Understanding of Agile Scrum and SDLC principles.
- Containerization and orchestration: Docker, Kubernetes, OpenShift, Consul, etc.
- Knowledge of NFV (OpenStack, vSphere, vCloud, etc.)
- Experience in the Data Analytics/AI/ML or Marketing Tech domain is an added advantage
What the role needs
● Review of current DevOps infrastructure & redefine code merging strategy as per product roll out objectives
● Define deployment frequency strategy based on the product roadmap document and ongoing product-market-fit-related tweaks and changes
● Architect benchmark docker configurations based on planned stack
● Establish uniformity of environment from developer machines to multiple production environments
● Plan & execute test automation infrastructure
● Setup automated stress testing environment
● Plan and execute logging & stack trace tools
● Review DevOps orchestration tools & choices
● Coordination with external data centers and AWS in the event of provisioning, outages or maintenance.
Requirements
● Extensive experience with AWS cloud infrastructure deployment and monitoring
● Advanced knowledge of programming languages such as Python and Golang, and experience writing code and scripts
● Experience with Infrastructure as Code & DevOps management tools - Terraform and Packer - for DevOps asset management, monitoring, infrastructure cost estimation, and infrastructure version management
● Configure and manage data sources like MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc
● Experience with network, infrastructure and OWASP security standards
● Experience with web server configurations - Nginx, HAProxy, SSL configurations with AWS - and understanding & management of sub-domain-based product rollouts for clients.
● Experience with deployment and monitoring of event streaming & distributing technologies and tools - Kafka, RabbitMQ, NATS.io, socket.io
● Understanding & experience of Disaster Recovery Plan execution
● Working with other senior team members to devise and execute strategies for data backup and storage
● Be aware of current CVEs, potential attack vectors, and vulnerabilities, and apply patches as soon as possible
● Handle incident responses, troubleshooting and fixes for various services
Skills – Jboss, DevOps, ServiceNow, Windows Server.
JD - Application Maintenance -- Must have: installation and configuration of custom/standard software (e.g., FileZilla, JDK, OpenJDK); installation and configuration of JBoss/Tomcat server; configuration of HTTPS certificates in JBoss/Tomcat; Windows Event Viewer/IIS logs/Windows Security/Active Directory; how to set environment variables, registry values, etc. Nice to have: basics of monitoring; knowledge of PowerShell and MS Azure DevOps; deploying and configuring applications; how to check the last installed version of any software/patch; ServiceNow, ITIL, Incident Management, Change Management.
TECHNICAL Project Manager
GormalOne LLP. Mumbai IN
GormalOne is on a mission to make dairy farming highly profitable, especially for the smallest farmers living in the most neglected geographies. We are a dairy-focused technology solution provider with a vision to resolve the pain points of everyone in the dairy ecosystem. We are building a comprehensive platform for cattle management where everyone from farmers to AITs (Artificial Insemination Technicians), para vets, veterinarians, consultants, and corporates can collaborate and benefit from each other using data. Nitara offers an easy-to-use, artificial intelligence-enabled herd management system for farmers/veterinarians/paraprofessionals/AITs.
We are looking for a Project manager who will be instrumental in the planning and management of both IT and IT-related projects. As a Technical Project Manager, you are required to have a high level of technical expertise as well as organization, leadership, and communication skills.
Responsibilities
- Responsible for project planning and delivery. Ensure scope for all the projects are delivered to meet the business need as per the agreed schedule, scope, quality, and cost. Lead project planning effort in the creation of project plans, tasks, and schedules
- Conduct high level scoping, impact and risk assessment, schedule, and resource requirements for the successful delivery of the projects to the agreed scope.
- Organize and facilitate regular project scrum to track project tasks and dependency.
- Identify, assess, track and mitigate issues and risks at multiple levels
- Work closely with various teams internally & the user side to ensure delivery of solutions and deployment as per requirement.
- Drive effective teamwork, communication, collaboration, overcome obstacles, resolve conflicts, and commitment across multiple disparate groups with competing priorities.
- Responsible for the overall health of the projects by monitoring schedule, budget, milestones, and benefits attainment, project risk control, and project governance.
- Ensure that realistic project plans are prepared and maintained & all activities are tracked against the plan, providing regular and accurate reports to Management.
- Responsible for change control, gaining agreement for CRs to the projects from the product owner and other relevant stakeholders.
- Provide weekly / regular project updates to the Program Management Office.
- Work towards continual improvement by implementing corrective action plans based on performance analysis, lessons learned, project archives, etc.
- Hiring, training, and developing new employees to meet organizational needs
- Reviewing employee performance evaluations and providing feedback on areas for improvement.
Certifications Required
- Project Management Professional (PMP) or PRINCE2 Foundation/Practitioner.
- Certified Scrum Master (CSM)
Skills and Capabilities
Essentials
- Min 3-5 years of technical project management experience with overall 5-10 years of IT services experience.
- Excellent project management skills to develop a business case and deliver solutions to the user with oversight of technical solution implementation.
- Proven experience in managing projects using agile methodologies such as agile scrum. Good knowledge of tools like Jira, Azure DevOps, MS Projects, Confluence, etc.
- Experience in managing projects in a variety of technologies (e.g. Java, .Net, etc.)
- A high degree of technical competency in software development practices, project management, and the ability to bridge the gap between technology and business needs.
- Strong analytical and quantitative skills in order to gather insights from data to identify underlying issues
- Ability to thrive in a fast-paced, Scrum/Agile development environment with the ability to multi-task and respond flexibly to change
- Deal well with ambiguity and changing deadlines while keeping the focus on delivering results.
- Strong and effective project leader who can prioritize well and communicate effectively.
- Comfortable in getting hands dirty in day-to-day tasks to get things done
- Experienced self-starter with a problem-solving focus, an analytical mindset, and extreme attention to detail.
- Experience in handling different phases of the project's life cycle from requirement gathering until steady-state handover.
- Proficient in written and spoken English in a business setting.
Soft Skills required:
- Strong analytical skills
- Strong written documentation skills
- Strong presentation skills
- Good communication skills and able to communicate at all levels
- Ability to work well in a diverse environment and proven ability to influence others to achieve positive outcomes
Kindly note: Salary shall be commensurate with qualifications and experience
Visit us at - https://gormalone.com/ & https://www.nitara.co.in/
Solution Oriented Mindset
- Assist project in all technical aspects of tooling and DevOps
- Proactively lead the release cycle and documentation of new tool versions
- Proactively identify risks related to application /deliverables and propose a mitigation plan
- Provide custom solutions as per customer requirements
Autonomy & Problem Solving Mindset
- Work in complete autonomy to deliver project deliverables, including advanced technical deliverables, with the required level of quality
- Must have troubleshooting skills
Agile Mindset
- Contribute to improvement of internal process, tooling, and quality process
- Design, build and collect technical materials as part of project executions in a spirit of reusability for future engagements and maintain knowledge on best practices, tools, and reusable components for CAST analysis
We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.
We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.
Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!
The technology stack at Velocity comprises a wide variety of cutting-edge technologies like Node.js, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, React, Redux (Saga), Redis, Lambda, etc.
Key Responsibilities
- Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality
- Work with the Office of the CTO as an active member of our architecture guild
- Write pipelines to consume data from multiple sources (a minimal illustrative sketch follows this list)
- Write a data transformation layer using DBT to transform millions of rows of data for the data warehouse
- Implement data warehouse entities with common reusable data model designs, with automation and data quality capabilities
- Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
What To Bring
- 5+ years of software development experience; startup experience is a plus.
- Past experience working with Airflow and DBT is preferred.
- 5+ years of experience working in any backend programming language.
- Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience with the formulation of ideas, building proofs of concept (POC) and converting them into production-ready projects.
- Experience building and deploying applications on on-premise and AWS or Google Cloud based infrastructure.
- A basic understanding of Kubernetes & Docker is a must.
- Experience in data processing (ETL, ELT) and/or cloud-based platforms.
- Working proficiency and communication skills in verbal and written English.
SUVI (offers up to 18 LPA)
• Strong proficiency with Java programming & DevOps
• Must have experience with microservices using Spring Boot/Jersey/Swagger.
• Must have good knowledge of Docker.
• Must have at least 1 to 2 years' experience of web application development.
• Knowledge of OOP concepts, industry best practices and design
• Well versed in back-end build pipelines and tools; professional, precise communication skills
• Work experience writing unit tests
• Work experience applying the best practices of web application development
• Working experience in an Agile team, especially with Scrum
• Good understanding of DevOps and CI/CD principles and practices to improve software quality & efficiency
• Good understanding of web technology/enterprise-level applications
• Good to have experience in JavaScript frameworks
• Good to have experience in Agile methodology
• Good to have previously worked on distributed systems
• Good to have working knowledge of Kafka and Redis
• Good to have exposure to stream processing and functional programming
CANDIDATES MUST HAVE
• Java 8 or above
• DevOps
• 1+ years of web development experience
• JavaScript framework
• Hibernate & Microservices
- Experience building large-scale, large-volume services & distributed apps, taking them through production and post-production life cycles
- Experience in Programming Language: Java 8, Javascript
- Experience in Microservice Development or Architecture
- Experience with Web Application Frameworks: Spring or Springboot or Micronaut
- Designing: High Level/Low-Level Design
- Development Experience: Agile/ Scrum, TDD(Test Driven Development)or BDD (Behaviour Driven Development) Plus Unit Testing
- Infrastructure Experience: DevOps, CI/CD Pipeline, Docker/ Kubernetes/Jenkins, and Cloud platforms like – AWS, AZURE, GCP, etc
- Experience on one or more Database: RDBMS or NoSQL
- Experience on one or more Messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel
- Security (Authentication, scalability, performance monitoring)
About the company:
Our client is a B2B2C tech Web3 startup founded by IITB graduates who are experienced in retail, e-commerce and fintech.
Vision: Our client aims to change the way that customers, creators, and retail investors interact and transact at brands of all shapes and sizes - essentially, becoming the Web3 version of a brand-driven social e-commerce & investment platform.
Role Description
We are looking for a DevOps Engineer responsible for managing cloud technologies, deployment automation and CI/CD.
Key Responsibilities
Building and setting up new development tools and infrastructure
Understanding the needs of stakeholders and conveying this to developers
Working on ways to automate and improve development and release processes
Testing and examining code written by others and analyzing results
Ensuring that systems are safe and secure against cybersecurity threats
Identifying technical problems and developing software updates and 'fixes'
Working with software developers and software engineers to ensure that development follows established processes and works as intended
Planning out projects and being involved in project management decisions
Required Skills and Qualifications
BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
4+ years of overall development experience.
Strong understanding of cloud deployment and setup.
Hands-on experience with tools like Jenkins, Gradle etc.
Deploy updates and fixes.
Provide Level 2 technical support.
Build tools to reduce occurrences of errors and improve customer experience.
Perform root cause analysis for production errors.
Investigate and resolve technical issues.
Develop scripts to automate deployment.
Design procedures for system troubleshooting and maintenance.
Proficient with git and git workflows.
Working knowledge of databases and SQL.
Problem-solving attitude.
Collaborative team spirit
Regards
Team Merito
About us:
GaragePlug is an all-in-one cloud platform that redefines a customer’s journey with automotive service businesses. GaragePlug harnesses the power of digitalization to help automotive service businesses achieve immense operational efficiency and build a highly impressionable customer experience that is sure to win any customer. GaragePlug aims to bring technological disruption to the automotive after-sales service & repair industry by taking the industry one step closer to the future. Currently, GaragePlug is trusted by hundreds of brands across 15+ countries and continues to expand across the world!
Experience:
At least 10 years of experience.
Consultant Role Description:
- Tech Lead with strong Expertise in Core Java & Spring boot.
- Hands-on in developing microservices, the Eureka service registry, and Docker; making use of OAuth for API auth, Kubernetes, and Grafana.
- Knowledge of Cloud Technologies, AWS, CI/CD, Jenkins, and Testing methodologies is preferred.
- Hands-on to do API Documentation. Hands-on to invoke 3rd party REST / SOAP API invocations through REST template.
- Ability to design and architect solutions.
- Direct the integration of technical and engineering activities within projects.
- Knowledge of DevOps practices and tools.
- Ability to formulate and deliver solutions to complex problems in a large and diverse technology landscape with multiple teams.
- Recruit, coach, and mentor the best engineering talent
Preferable Location(s): Bengaluru, India Work Type: Part Time
Apply through this link
https://garageplug.freshteam.com/jobs/lUnYthmY9U84/consultant-architect
Top Management Consulting Company
We are looking for a technically driven MLOps Engineer for one of our premium clients.
COMPANY DESCRIPTION:
Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs, including automated testing frameworks and libraries such as pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
JOB DESCRIPTION
- The IT team lead must guide & manage DevOps engineers, cloud system administrators and desktop support analysts, and also assist in procuring & managing assets.
- Design and develop a scalable IT infrastructure that benefits the organization.
- Take part in IT strategic planning activities that reflect the future vision of the organization.
- Introduce cost-effective best practices related to the business needs of the organization.
- Research and recommend solutions that circumvent potential technical issues.
- Provide high levels of customer service as it pertains to enterprise infrastructure.
- Review and document key performance metrics and indicators to ensure high performance of IT service delivery systems.
- Take charge of available client databases, networks, storage, servers, directories, and other technology services.
- Collaborate with the network engineer to design infrastructure improvements and changes and to troubleshoot any issues that arise.
- Plan, design, and manage infrastructure technologies that can support complex and heterogeneous corporate data and voice infrastructure.
- Execute, test and roll out innovative solutions to keep up with the growing competition.
- Create and document proper installation and configuration procedures.
- Assist in handling software distributions and software updates and patches.
- Oversee deployment of systems and network integration in association with partner clients, business partners, suppliers and subsidiaries.
- Create, update, and manage IT policies.
- Manage & drive assigned vendors. Perform cost-benefit analysis and provide recommendations to management.
KEY Proficiencies
* Bachelor’s or Master’s degree in computer science, information technology, electronics, telecommunications or any related field.
* Minimum 10 years of experience in the above mentioned fields.
We are looking for Principal Engineers, who are strong individual contributors with
expertise and passion in solving difficult problems in many areas.
Your day at nference,
• Acting as an entrepreneur - taking ownership of the problem statement end-to-end
• Delivering direct value to the customer - and not just stopping with delivery
• Estimating, planning, dividing and conquering customer problem statements - through sturdily developed & performant technical solutions
• Handling multiple competing priorities and ambiguity - all in a fast-paced, high-growth environment
Qualities Which We Look For In The Ideal Candidate
• 6-8 years of experience in building high-performance distributed systems
• Proven track record in building backend systems from scratch
• Excellent coding skills (preferably any two of C/C++/Python and Go)
• Good depth in algorithms & data structures
• Good understanding of OS-level concepts
• Experience working with DevOps tools for deployment, monitoring, etc., like Ansible, ELK, Prometheus
• Wide knowledge of different technologies like databases, messaging systems, etc.
• Experience building complex technical solutions - highly scalable service-oriented architectures, distributed cloud-based systems - which power our products
Benefits:
• Be a part of the "Google of biomedicine" as recognized by the Washington Post
• Work with some of the brilliant minds of the world solving exciting real-world problems through Artificial Intelligence, Machine Learning, analytics and insights, through triangulating unstructured and structured information from the biomedical literature as well as from large-scale molecular and real-world datasets.
• Our benefits package includes the best of what leading organizations provide, such as stock options, paid time off, healthcare insurance, and gym/broadband reimbursement.