36+ Google Cloud Platform (GCP) Jobs in Delhi, NCR and Gurgaon
Apply to 36+ Google Cloud Platform (GCP) jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Google Cloud Platform (GCP) job opportunities across top companies like Google, Amazon & Adobe.
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences & Healthcare, Retail, CPG, Financial Services and Supply Chain, CLOUDSUFI is positioned to meet customers wherever they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation, or national origin. We are dedicated to providing equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
What we are looking for: Backend Lead Engineer
As a Backend Lead Engineer, you will play a pivotal role in driving the technical vision and execution of our product development team. You will lead a team of talented engineers, mentor their growth, and ensure the delivery of high-quality, scalable, and maintainable software solutions.
Responsibilities:
• Technical Leadership:
o Provide technical leadership and guidance to a team of backend engineers.
o Mentor and develop engineers to enhance their skills and capabilities.
o Collaborate with product managers, designers, and other stakeholders to define product requirements and technical solutions.
• Development:
o Design, develop, and maintain robust and scalable backend applications.
o Optimize application performance and scalability for large-scale deployments.
o Write clean, efficient, and well-tested code that adheres to best practices.
o Stay up-to-date with the latest technologies and trends in web development.
• Project Management:
o Lead and manage software development projects from inception to deployment.
o Estimate project timelines, assign tasks, and track progress.
o Ensure timely delivery of high-quality software.
• Problem-Solving:
o Identify and troubleshoot technical issues.
o Develop innovative solutions to complex problems.
• Architecture Design:
o Design and implement scalable and maintainable software architectures.
o Ensure the security, performance, and reliability of our systems.
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 6+ years of experience in backend software development.
• Proven experience leading and mentoring engineering teams.
• Strong proficiency in backend technologies (Python, Node.js, Java).
• Experience with GCP and deployment at scale.
• Familiarity with database technologies (SQL, NoSQL).
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
Preferred Qualifications:
• Experience with agile development methodologies (Scrum, Kanban).
• Knowledge of DevOps practices and tools.
• Experience with microservices architecture.
• Contributions to open-source projects.
About the Company-
AdPushup is an award-winning ad revenue optimization platform and Google Certified Publishing Partner (GCPP), helping hundreds of web publishers grow their revenue using cutting-edge technology, premium demand partnerships, and proven ad ops expertise.
Our team is a mix of engineers, marketers, product evangelists, and customer success specialists, united by a common goal of helping publishers succeed. We have a work culture that values expertise, ownership, and a collaborative spirit.
Job Overview: Java Backend Lead Role
We are seeking a highly skilled and motivated Software Engineering Team Lead to join our dynamic team. The ideal candidate will have a strong technical background, proven leadership experience, and a passion for mentoring and developing a team of talented engineers. This role will be pivotal in driving the successful delivery of high-quality software solutions and fostering a collaborative and innovative work environment.
Exp- 5+ years
Location- New Delhi
Work Mode- Hybrid
Key Responsibilities:-
● Leadership and Mentorship: Lead, mentor, and develop a team of software engineers, fostering an environment of continuous improvement and professional growth.
● Project Management: Oversee the planning, execution, and delivery of software projects, ensuring they meet quality standards, timelines, and budget constraints.
● Technical Expertise: Provide technical guidance and expertise in software design, architecture, development, and best practices. Stay updated with the latest industry trends and technologies. Design, develop, and maintain high-quality applications, taking full, end-to-end ownership, including writing test cases, setting up monitoring, etc.
● Collaboration: Work closely with cross-functional teams to define project requirements, scope, and deliverables.
● Code Review and Quality Assurance: Conduct code reviews to ensure adherence to coding standards, best practices, and overall software quality. Implement and enforce quality assurance processes.
● Problem Solving: Identify, troubleshoot, and resolve technical challenges and bottlenecks. Provide innovative solutions to complex problems.
● Performance Management: Set clear performance expectations, provide regular feedback, and conduct performance evaluations for team members.
● Documentation: Ensure comprehensive documentation of code, processes, and project-related information.
Qualifications:-
● Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
● Experience: Minimum of 5 years of experience in software development.
● Technical Skills:
○ A strong body of prior backend work, successfully delivered in production. Experience building large volume data processing pipelines will be an added bonus.
○ Expertise in Core Java.
■ In-depth knowledge of the Java concurrency framework.
■ Sound knowledge of concepts like exception handling, garbage collection, and generics.
■ Experience in writing unit test cases, using any framework.
■ Hands-on experience with lambdas and streams.
■ Experience in using build tools like Maven and Ant.
○ Good understanding of and hands-on experience with Java frameworks (e.g., Spring Boot, Vert.x) will be an added advantage.
○ Good understanding of security best practices.
○ Hands-on experience with low-level and high-level design practices and patterns.
○ Hands on experience with any of the cloud platforms such as AWS, Azure, and Google Cloud.
○ Familiarity with containerization and orchestration tools like Docker, Kubernetes and Terraform.
○ Strong understanding of database technologies, both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Couchbase).
○ Knowledge of DevOps practices and tools such as Jenkins, CI/CD.
○ Strong understanding of software development methodologies (e.g., Agile, Scrum).
● Leadership Skills: Proven ability to lead, mentor, and inspire a team of engineers. Excellent interpersonal and communication skills.
● Problem-Solving Skills: Strong analytical and problem-solving abilities. Ability to think critically and provide innovative solutions.
● Project Management: Experience in managing software projects from conception to delivery. Strong organizational and time-management skills.
● Collaboration: Ability to work effectively in a cross-functional team environment. Strong collaboration and stakeholder management skills.
● Adaptability: Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements.
Why Should You Work for AdPushup?
At AdPushup, we have
1. A culture of valuing our employees and promoting an autonomous, transparent, and ethical work environment.
2. Talented and supportive peers who value your contributions.
3. Challenging opportunities: learning happens outside the comfort-zone and that’s where our team likes to be - always pushing the boundaries and growing personally and professionally.
4. Flexibility to work from home: We believe in work & performance instead of measuring conventional benchmarks like work-hours.
5. Plenty of snacks and catered lunch.
6. Transparency: an open, honest and direct communication with co-workers and business associates.
Job Description
We are seeking a talented DevOps Engineer to join our dynamic team. The ideal candidate will have a passion for building and maintaining cloud infrastructure while ensuring the reliability and efficiency of our applications. You will be responsible for deploying and maintaining cloud environments, enhancing CI/CD pipelines, and ensuring optimal performance through proactive monitoring and troubleshooting.
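By way of illustration (an assumption about the kind of tooling involved, not part of the posting itself), here is a tiny Python log-analysis helper in the spirit of the proactive monitoring and log-analysis duties described here. The log path and log format are hypothetical placeholders.

import re
from collections import Counter

# Match lines like "2024-01-01 12:00:00 ERROR my.service.Name ..." (hypothetical format)
ERROR_RE = re.compile(r"ERROR\s+(?P<logger>[\w.]+)")

counts = Counter()
with open("/var/log/app/service.log") as fh:   # hypothetical log location
    for line in fh:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group("logger")] += 1

# Report the noisiest loggers so issues can be spotted before users feel them.
for logger, n in counts.most_common(5):
    print(f"{logger}: {n}")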
Roles and Responsibilities:
- Cloud Infrastructure: Deploy and maintain cloud infrastructure on Microsoft Azure or AWS, ensuring scalability and reliability.
- CI/CD Pipeline Enhancement: Continuously improve CI/CD pipelines and build robust development and production environments.
- Application Deployment: Manage application deployments, ensuring high reliability and minimal downtime.
- Monitoring: Monitor infrastructure health and perform application log analysis to identify and resolve issues proactively.
- Incident Management: Troubleshoot and debug incidents, collaborating closely with development teams to implement effective solutions.
- Infrastructure as Code: Enhance Ansible roles and Terraform modules, maintaining best practices for Infrastructure as Code (IaC).
- Tool Development: Write tools and utilities to streamline and improve infrastructure operations.
- SDLC Practices: Establish and uphold industry-standard Software Development Life Cycle (SDLC) practices with a strong focus on quality.
- On-call Support: Be available 24/7 for on-call incident management for production environments.
Requirements:
- Cloud Experience: Hands-on experience deploying and provisioning virtual machines on Microsoft Azure or Amazon AWS.
- Linux Administration: Proficient with Linux systems and basic system administration tasks.
- Networking Knowledge: Working knowledge of network fundamentals (Ethernet, TCP/IP, WAF, DNS, etc.).
- Scripting Skills: Proficient in BASH and at least one high-level scripting language (Python, Ruby, Perl).
- Tools Proficiency: Familiarity with tools such as Git, Nagios, Snort, and OpenVPN.
- Containerization: Strong experience with Docker and Kubernetes is mandatory.
- Communication Skills: Excellent interpersonal communication skills, with the ability to engage with peers, customers, vendors, and partners across all levels of the organization.
Job Title: Javascript Developers (Full-Stack Web)
On-site Location: NCTE, Dwarka, Delhi
Job Type: Full-Time
Company: Bharattech AI Pvt Ltd
Eligibility:
- 6 years of experience (minimum)
- B.E/B.Tech/M.E/M.Tech -or- MCA -or- M.Sc(IT or CS) -or- MS in Software Systems
About the Company:
Bharattech AI Pvt Ltd is a leader in providing innovative AI and data analytics solutions. We have partnered with the National Council for Teacher Education (NCTE), Delhi, to implement and develop their data analytics & MIS development lab, called VSK. We are looking for skilled Javascript Developers (Full-Stack Web) to join our team and contribute to this prestigious project.
Job Description:
Bharattech AI Pvt Ltd is seeking two Javascript Developers (Full-Stack Web) to join our team for an exciting project with NCTE, Delhi. As a Full-Stack Developer, you will play a crucial role in the development and integration of the VSK Web application and related systems.
Work Experience:
- Minimum 6 years' experience in Web apps, PWAs, Dashboards, or Website Development.
- Proven experience in the complete lifecycle of web application development.
- Demonstrated experience as a full-stack developer.
- Knowledge of either MERN, MEVN, or MEAN stack.
- Knowledge of popular frameworks (Express/Meteor/React/Vue/Angular etc.) for any of the stacks mentioned above.
Role and Responsibilities:
- Study the readily available client datasets and leverage them to run the VSK smoothly.
- Communicate with the Team Lead and Project Manager to capture software requirements.
- Develop high-level system design diagrams for program design, coding, testing, debugging, and documentation.
- Develop, update, and modify the VSK Web application/Web portal.
- Integrate existing software/applications with VSK using readily available APIs.
Skills and Competencies:
- Proficiency in full-stack development, including both front-end and back-end technologies.
- Strong knowledge of web application frameworks and development tools.
- Experience with API integration and software development best practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and the ability to work effectively in a team environment.
Why Join Us:
- Be a part of a cutting-edge project with a significant impact on the education sector.
- Work in a dynamic and collaborative environment with opportunities for professional growth.
- Competitive salary and benefits package.
Join Bharattech AI Pvt Ltd and contribute to transforming technological development at NCTE, Delhi!
Bharattech AI Pvt Ltd is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
MUST HAVES:
- #java11, Java 17 & above only
- #springboot #microservices experience is must
- #cloud experience is must (AWS or GCP or Azure)
- Strong understanding of #functionalprogramming and #reactiveprogramming concepts.
- Experience with asynchronous programming and async frameworks/libraries.
- Proficiency in #sql databases (MySQL, PostgreSQL, etc.).
- WFO in NOIDA only.
Other requirements:
- Knowledge of socket programming and real-time communication protocols.
- Experience building complex enterprise-grade applications with multiple components and integrations
- Good coding practices and ability to design solutions
- Good communication skills
- Ability to mentor team and give technical guidance
- #fullstack skills with any one of #javascript, #reactjs or #angularjs is preferable.
- Excellent problem-solving skills and attention to detail.
- Preferred experience with #nosql databases (MongoDB, Cassandra, Redis, etc.).
Publicis Sapient Overview:
As a Senior Associate in Data Engineering at Publicis Sapient, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions, utilizing a deep understanding of data integration and big data design principles. You will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
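As a flavour of the day-to-day work, here is a minimal PySpark batch-pipeline sketch (ingestion, light wrangling, partitioned storage). The paths, column names and app name are hypothetical placeholders, not from any client engagement.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-and-wrangle").getOrCreate()

# Ingest raw CSV from a landing zone (hypothetical path and schema).
raw = spark.read.option("header", True).csv("s3a://landing-zone/events/")

# Wrangle: drop malformed rows, normalize the timestamp, deduplicate.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Store as partitioned Parquet for downstream analytics.
clean.write.mode("overwrite").partitionBy("event_date").parquet("s3a://curated/events/")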
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala or Python (Java preferable)
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed working knowledge of data-platform-related services on at least one cloud platform, IAM and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Publicis Sapient Overview:
As a Senior Associate in Data Engineering at Publicis Sapient, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As a Senior Associate L1 in Data Engineering, you will create technical designs and implement components for data engineering solutions, utilizing a deep understanding of data integration and big data design principles. You will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
4. Strong experience in at least one of the programming languages Java, Scala or Python (Java preferable)
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data-platform-related services on at least one cloud platform, IAM and data security
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
A LEADING US BASED MNC
Data Engineering : Senior Engineer / Manager
As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Must Have skills :
1. GCP
2. Spark Streaming: live data streaming experience is desired (a minimal sketch follows this list).
3. Any one coding language: Java/Python/Scala
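A minimal sketch of the kind of live-streaming work referenced above, assuming Kafka as the source. The broker, topic and console sink are hypothetical, and the job would also need the spark-sql-kafka package on its classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("live-stream").getOrCreate()

# Read a live stream from Kafka (hypothetical broker and topic).
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka delivers the payload as binary; cast it and count events per minute.
counts = (
    stream.select(F.col("value").cast("string").alias("payload"), "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

# Write the running counts to the console for demonstration purposes.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()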
Skills & Experience :
- Overall experience of minimum 5+ years, with minimum 4 years of relevant experience in Big Data technologies
- Hands-on experience with the Hadoop stack - HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of the programming languages Java, Scala or Python (Java preferable)
- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed and working knowledge with data platform related services on GCP
- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training and/or experience that demonstrates the ability to perform the duties of the position
Your Impact :
- Data Ingestion, Integration and Transformation
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
- Build functionality for data analytics, search and aggregation
Roles and Responsibilities
• Create solution prototypes and conduct proofs of concept for new tools.
• Research and build understanding of new tools and areas.
• Clearly articulate the pros and cons of various technologies/platforms, and perform detailed analysis of business problems and technical environments to derive a solution.
• Optimise the application for maximum speed and scalability.
• Work on feature development and bug fixing.
Technical skills
• Must have knowledge of networking in Linux, and the basics of computer networks in general.
• Must have intermediate/advanced knowledge of one programming language, preferably Python.
• Must have experience writing shell scripts and configuration files.
• Should be proficient in Bash.
• Should have excellent Linux administration capabilities.
• Working experience of SCM. Git is preferred.
• Knowledge of build and CI-CD tools, like Jenkins, Bamboo etc is a plus.
• Understanding of Architecture of OpenStack/Kubernetes is a plus.
• Code contributed to OpenStack/Kubernetes community will be a plus.
• Data Center network troubleshooting will be a plus.
• Understanding of NFV and SDN domain will be a plus.
Soft skills
• Excellent verbal and written communications skills.
• Highly driven, positive attitude, team player, self-learning, self-motivating and flexible
• Strong customer focus - decent networking and relationship management
• Flair for creativity and innovation
• Strategic thinking
This is an individual contributor role and will need client interaction on the technical side.
Must have Skill - Linux, Networking, Python, Cloud
Additional Skills-OpenStack, Kubernetes, Shell, Java, Development
🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, Pyspark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data (a minimal sketch follows this list).
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality aspects.
- Support CI/CD processes.
- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
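For illustration, a toy ETL step for time-series telemetry using pandas from the PyData stack this posting calls for; the file name, column names and resampling window are hypothetical.

import pandas as pd

# Extract: load raw telemetry with a timestamp column (hypothetical file).
raw = pd.read_csv("telemetry.csv", parse_dates=["timestamp"])

# Transform: index by time, drop impossible readings, downsample to 1-minute means.
agg = (
    raw.set_index("timestamp")
       .sort_index()
       .query("sensor_value >= 0")      # discard physically impossible readings
       .resample("1min")
       .mean(numeric_only=True)
)

# Load: persist the aggregate for downstream search and analytics.
agg.to_parquet("telemetry_1min.parquet")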
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (pandas/NumPy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark).
- Proficient in Linux and scripting languages (Bash Scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing.
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.
Golang Developer
Location: Chennai/ Hyderabad/Pune/Noida/Bangalore
Experience: 4+ years
Notice Period: Immediate/ 15 days
Job Description:
- Must have at least 3 years of experience working with Golang.
- Strong Cloud experience is required for day-to-day work.
- Experience with the Go programming language is necessary.
- Good communication skills are a plus.
- Skills: AWS, GCP, Azure, Golang
Description
Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.
What You'll Do
Troubleshooting and analyzing technical issues raised by internal and external users.
Working with Monitoring tools like Prometheus / Nagios / Zabbix.
Developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef will be preferred.
Monitor infrastructure alerts and take proactive action to avoid downtime and customer impacts.
Working closely with the cross-functional teams to resolve issues.
Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
Work in close coordination with the development and operations team such that the application is in line with performance according to the customer's expectations.
What you should have
Bachelor’s or Master’s degree in computer science or any related field.
3 - 6 years of experience in Linux / Unix and cloud computing techniques.
Familiar with working on cloud and datacenter environments for enterprise customers.
Hands-on experience with Linux / Windows / macOS and Batch/AppleScript/Bash scripting.
Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.
Familiar with AWS technologies like EC2, S3, Lambda, IAM, etc.
Must know how to choose the best tools and technologies which best fit the business needs.
Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins etc.
Excellent organizational skills to adapt to a constantly changing technical environment
The candidate must have 2-3 years of experience in the domain. The responsibilities include:
● Deploying system on Linux-based environment using Docker
● Manage & maintain the production environment
● Deploy updates and fixes
● Provide Level 1 technical support
● Build tools to reduce occurrences of errors and improve customer experience
● Develop software to integrate with internal back-end systems
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
● Develop scripts to automate visualization
● Design procedures for system troubleshooting and maintenance
● Experience working on Linux-based infrastructure
● Excellent understanding of the MERN stack, Docker & Nginx (good to have: Node.js)
● Configuring and managing databases such as MongoDB
● Excellent troubleshooting skills
● Experience of working with AWS/Azure/GCP
● Working knowledge of various tools, open-source technologies, and cloud services
● Awareness of critical concepts in DevOps and Agile principles
● Experience of CI/CD Pipeline
Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured our “Series-D” funding.
Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!
What will you do?
• Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective
• Create standardized tooling and templates for development teams to create CI/CD pipelines
• Ensure infrastructure is created and maintained using terraform
• Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.
• Maintain transparency and clear visibility of costs associated with various product verticals, environments and work with stakeholders to plan for optimization and implementation
• Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs
You should apply, if you
1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript(NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)
2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.
3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning
4. Are up with the times: Have expertise in one or more cloud platforms (Amazon WebServices or Google Cloud Platform or Microsoft Azure), and have experience in creating and managing infrastructure completely through Terraform kind of tool
5. Have it all on your fingertips: Have experience building CI/CD pipelines using Jenkins and Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s (see the sketch after this list)
6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)
7. Bring that extra zing: Have the ability to program/script and strong fundamentals in Linux and Networking.
8. Know your toys: Have a good understanding of Microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self hosted environments, that’s a plus
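As a sketch of the K8s troubleshooting mentioned above, using the official Kubernetes Python client to list pods that are not in a healthy phase. It assumes a working kubeconfig; namespace handling is deliberately simplistic.

from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

# Flag any pod that is neither Running nor Succeeded - a quick triage starting point.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")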
Being Part of the Clan
At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!
It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️
Are you a go-getter with the chops to nail what you do? Then this is the place for you.
• Make technical and product decisions based on the roadmap autonomously
• Contribute to team and organizational improvements in process and infrastructure
• Build new features, fix bugs, and suggest projects that will improve productivity and infrastructure
• Code, test and operate React.js-based services
• Debug production issues across services and multiple levels of the stack
Requirements
• Education: Engineering Graduate/Postgraduate in Computer Science
• Experience: 2-5 years of experience as a Front-end Developer, with at least 2 years of React.js experience
Must to Have
• 2+ years of UI development
• 2+ years of React.js development
• Knowledge of GraphQL and TypeScript
• Experience with Node.js
• Sass, Git, Linux
• Knowledge of Microservice architecture
• Knowledge of Docker
• Experience with AWS, Azure or Google Cloud
• Good understanding of MySQL/MariaDB
• Understanding of Non-relational Databases MongoDB, Redis
• Insightful experience in various web design technologies: HTML, CSS, Ajax and jQuery
• Must have analytical and debugging skills
at Leverage Edu
Responsibilities:
- Developing high-performance applications by writing testable, reusable, and efficient code.
- Implementing effective security protocols, data protection measures, and storage solutions.
- Recommending and implementing improvements to processes and technologies.
- Designing customer-facing UI and back-end services for various business processes.
Requirements:
- 1-8 years of appropriate technical experience
- Proven experience as a Full Stack Developer or similar role, preferably using the MEAN/MERN stack.
- Good understanding of client-side and server-side architecture
- Excellent interpersonal skills and professional approach
- Ability to work in a dynamic, fast moving and growing environment
- Be determined and willing to learn new technologies
- Immediate joiners are preferred
- Experience building large-scale, large-volume services & distributed apps, taking them through production and post-production life cycles
- Experience in Programming Language: Java 8, Javascript
- Experience in Microservice Development or Architecture
- Experience with Web Application Frameworks: Spring, Spring Boot, or Micronaut
- Designing: High Level/Low-Level Design
- Development Experience: Agile/Scrum, TDD (Test-Driven Development) or BDD (Behaviour-Driven Development), plus unit testing
- Infrastructure Experience: DevOps, CI/CD Pipeline, Docker/ Kubernetes/Jenkins, and Cloud platforms like – AWS, AZURE, GCP, etc
- Experience on one or more Database: RDBMS or NoSQL
- Experience on one or more Messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel
- Security (Authentication, scalability, performance monitoring)
A GCP Data Analyst profile must have the below skill sets:
- Knowledge of programming languages like SQL, Oracle, R, MATLAB, Java and Python
- Data cleansing, data visualization, data wrangling
- Data modeling , data warehouse concepts
- Adept with Big Data platforms like Hadoop and Spark for stream & batch processing
- GCP (Cloud Dataproc, Cloud Dataflow, Cloud Datalab, Cloud Dataprep, BigQuery, Cloud Datastore, Cloud Data Fusion, AutoML, etc.) - an illustrative BigQuery sketch follows this list
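For a concrete flavour, a minimal sketch querying BigQuery with the official google-cloud-bigquery Python client, run against a public sample dataset. Credentials and project selection come from your ambient GCP configuration.

from google.cloud import bigquery

bq = bigquery.Client()

# Aggregate a public sample table - the shape of a typical exploratory query.
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in bq.query(sql).result():
    print(row["name"], row["total"])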
A.P.T Portfolio is a high frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implementing best security practices and implementing data isolation policy between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backups of all crucial data/binaries/code, etc., in a secure manner, at the frequency warranted by each use case. Ensure that recovery from backup is tested and seamless.
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test and roll out new operating systems for all production, simulation and desktop environments. Work closely with developers to highlight the new performance-enhancement capabilities of new versions.
- Configuration management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third-party tools to put a monitoring mechanism in place for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - this shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo, etc.
Qualifications
- Bachelor's or Master's level degree, preferably in CSE/IT
- 10+ years of relevant experience in a sys-admin function
- Must have strong knowledge of IT infrastructure, Linux, networking and grid computing.
- Must have a strong grasp of automation & data management tools.
- Proficient in scripting languages, including Python
Desirables
- Professional attitude; a co-operative and mature approach to work; focused, structured and well-considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional / non-functional business requirements.
● Build and optimize ‘big data’ data pipelines, architectures and data sets.
● Maintain, organize & automate data processes for various use cases.
● Identify trends, do follow-up analysis, and prepare visualizations.
● Create daily, weekly and monthly reports of product KPIs.
● Create informative, actionable and repeatable reporting that highlights relevant business trends and opportunities for improvement.
Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science
● Strong analytical, quantitative and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred), and Linux is a must.
● Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Experience with Google Cloud data analytics products such as BigQuery, Dataflow, Dataproc, etc. (or similar cloud-based platforms).
● Experience working within a Linux computing environment and use of command-line tools, including knowledge of shell/Python scripting for automating common tasks (a small sketch follows this list).
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
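A small example of the kind of task-automation script referenced above - compressing application logs older than seven days. The directory and retention window are hypothetical conventions.

import gzip
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")   # hypothetical application log directory
MAX_AGE = 7 * 24 * 3600            # retention threshold, in seconds

for log in LOG_DIR.glob("*.log"):
    if time.time() - log.stat().st_mtime > MAX_AGE:
        # Stream-copy into a gzip file, then remove the original.
        with log.open("rb") as src, gzip.open(f"{log}.gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        log.unlink()
        print(f"compressed {log}")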
at VECROS TECHNOLOGIES PRIVATE LIMITED
Requirements:
- Bachelor's or master’s degree in computer science or equivalent work experience.
- A solid understanding of the full web technology stack and experience in developing web applications
- Strong experience with the React JS framework for building visually appealing interfaces, and with back-end frameworks like Django
- Knowledge of multiple front-end languages and libraries (e.g. HTML/ CSS, JavaScript, XML, jQuery)
- Good knowledge of object-oriented programming and data structures in Python
- Experience with NoSQL databases, web server technologies, designing, and developing APIs.
- Strong knowledge and work experience in AWS cloud services
- Proficiency with Git, CI/CD pipelines.
- Knowledge in Agile/Scrum processes.
- Experience in Docker container usage.
Roles and responsibilities:
- Develop sophisticated web applications for drone control, data management, security, and data protection.
- Build scalable features using advanced framework concepts such as Microservices, Queues, Jobs, Events, Task Scheduling, etc. (a minimal sketch follows this list)
- Integrate with Third-Party APIs/services
- Use theoretical knowledge and/or work experience to find innovative solutions to the problems at hand.
- Collaborate with team members to ideate solutions.
- Troubleshoot, debug, and upgrade existing software.
- Passionate to learn and adapt in a start-up environment.
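One possible shape for the queued-jobs work above, sketched with Celery - a common choice alongside Django, though the posting does not name it. The broker URL, task name and task body are hypothetical.

from celery import Celery

# Hypothetical broker; Redis and RabbitMQ are both common Celery backends.
app = Celery("drone_tasks", broker="redis://localhost:6379/0")


@app.task(bind=True, max_retries=3)
def process_flight_log(self, flight_id: str) -> str:
    try:
        # Placeholder body: parse telemetry, update mission records, etc.
        return f"processed {flight_id}"
    except Exception as exc:
        # Retry with a delay instead of silently dropping the job.
        raise self.retry(exc=exc, countdown=30)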
Perks:
- Hands-on experience with state of the art facilities we use for robot development.
- Opportunity to work with industry experts and researchers in the field of AI, Computer Vision, and Machine Learning.
- Competitive salary.
- Stock options.
- Opportunity to be an early part of the team and grow with the startup.
- Freedom of working schedule.
- Opportunity to kickstart and lead your own projects.
We are looking for an excellent, experienced person in the DevOps field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.
Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems, including:
- Understanding of accessibility and security compliance (depending on the specific project)
- User authentication and authorization between multiple systems, servers, and environments
- Integration of multiple data sources and databases into one system
- Understanding of the fundamental design principles behind a scalable application
- Configuration management tools (Ansible/Chef/Puppet); Cloud Service Providers (AWS/DigitalOcean); the Docker + Kubernetes ecosystem is a plus
- Ability to make key decisions for our infrastructure, networking and security
- Manipulation of shell scripts during migrations and DB connections
- Monitoring production server health across different parameters (CPU load, physical memory, swap memory) and setting up a monitoring tool (e.g., Nagios) for production server health
- Creating alerts and configuring monitoring of specified metrics to manage cloud infrastructure efficiently
- Setting up and managing VPCs and subnets, connecting different zones, and blocking suspicious IPs/subnets via ACLs
- Creating/managing AMIs, snapshots and volumes, and upgrading/downgrading AWS resources (CPU, memory, EBS)
- Managing microservices at scale and maintaining the compute and storage infrastructure for various product teams
- Strong knowledge of configuration management tools like Ansible, Chef and Puppet
- Extensive work with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error-log reports
- Experience in troubleshooting, backup, and recovery
- Excellent knowledge of cloud service providers like AWS and DigitalOcean
- Good knowledge of the Docker and Kubernetes ecosystem
- Proficient understanding of code versioning tools, such as Git
- Experience working in an automated environment (must-have)
- Good knowledge of AWS services like Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC and Amazon CloudWatch
- Scheduling jobs using crontab and creating swap memory
- Proficient knowledge of access management (IAM)
- Expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc. (must-have)
- Good knowledge of GCP
Educational Qualifications
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in the relevant field
Experience: 2-6 years
Mandatory:
● A minimum of 1 year of development, system design or engineering experience
● Excellent social, communication, and technical skills
● In-depth knowledge of Linux systems
● Development experience in at least two of the following languages: PHP, Go, Python, JavaScript, C/C++, Bash
● In-depth knowledge of web servers (Apache; Nginx preferred)
● Strong in using DevOps tools - Ansible, Jenkins, Docker, ELK
● Knowledge of APM tools; New Relic is preferred
● Ability to learn quickly, master our existing systems and identify areas of improvement
● Self-starter who enjoys and takes pride in the engineering work of their team
● Tried and tested real-world cloud computing experience - AWS/GCP/Azure
● Strong understanding of resilient systems design
● Experience in network design and management
Job Responsibilities
- Design, build & test ETL processes using Python & SQL for the corporate data warehouse
- Inform, influence, support, and execute our product decisions
- Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
- Evaluate and prototype new technologies in the area of data processing
- Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
- High energy level, strong team player and good work ethic
- Data analysis, understanding of business requirements and translation into logical pipelines & processes
- Identification, analysis & resolution of production & development bugs
- Support the release process including completing & reviewing documentation
- Configure data mappings & transformations to orchestrate data integration & validation
- Provide subject matter expertise
- Document solutions, tools & processes
- Create & support test plans with hands-on testing
- Peer reviews of work developed by other data engineers within the team
- Establish good working relationships & communication channels with relevant departments
Skills and Qualifications we look for
- University degree 2.1 or higher (or equivalent) in a relevant subject. Master’s degree in any data subject will be a strong advantage.
- 4-6 years of experience in data engineering.
- Strong coding ability and software development experience in Python.
- Strong hands-on experience with SQL and Data Processing.
- Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
- Good working experience in at least one ETL tool (Airflow would be preferable) - a minimal DAG sketch follows this list.
- Should possess strong analytical and problem solving skills.
- Good-to-have skills: Apache PySpark, CircleCI, Terraform
- Motivated, self-directed, able to work with ambiguity and interested in emerging technologies, agile and collaborative processes.
- Understanding & experience of agile / scrum delivery methodology
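A minimal Airflow DAG skeleton of the shape such a daily ETL might take, assuming Airflow 2.x. Task names and bodies are placeholders, not the actual warehouse logic.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # placeholder: pull from source systems


def transform_load():
    ...  # placeholder: clean and write to the warehouse


with DAG(
    dag_id="daily_etl",                 # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform_load", python_callable=transform_load)
    t1 >> t2                            # run extract before transform/load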
At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum which breaks the conventional market-research turnaround time.
SurveySensum is becoming a great tool not only to capture feedback but also to extract useful insights with quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This makes us challenge conventional software development design principles. The team likes to grind and helps each other through tough situations.
Day to day responsibilities include:
- Work on the deployment of code via Bitbucket, AWS CodeDeploy, and manual deployments
- Work on Linux/Unix OS and multi-tech application patching
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers.
- Create and modify scripts or applications to perform tasks
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
- Easing developers’ life so that they can focus on the business logic rather than deploying and maintaining it.
- Managing release of the sprint.
- Educating team of the best practices.
- Finding ways to avoid human error and save time by automating processes using Terraform, CloudFormation, Bitbucket Pipelines, CodeDeploy, and scripting
- Implementing cost-effective measures on the cloud and minimizing existing costs (an illustrative sketch follows this list)
Skills and prerequisites
- OOP knowledge
- Problem-solving nature
- Willingness to do R&D
- Works with the team and supports their queries patiently
- Brings new things to the table - stays updated
- Puts solutions above problems
- Willing to learn and experiment
- Techie at heart
- Git basics
- Basic AWS or any cloud platform - creating and managing EC2, Lambda, IAM, S3, etc. (a short boto3 sketch follows this list)
- Basic Linux handling
- Docker and orchestration (great to have)
- Scripting - Python (preferably) or Bash
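As a rough yardstick for the AWS-plus-Python scripting level above, a small boto3 sketch might look like the following; the bucket, artifact path and key are hypothetical placeholders.

```python
# Illustrative boto3 snippet covering the AWS basics mentioned above:
# list running EC2 instances and upload a build artifact to S3.
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# List running EC2 instances with their IDs and types.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])

# Upload a deployment artifact to S3 (hypothetical bucket and key).
s3.upload_file("build/app.zip", "my-deploy-bucket", "releases/app.zip")
```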
Position Summary
DevOps is a department of Horizontal Digital, within which we have 3 practices:
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is for a Cloud Engineering role with some experience in infrastructure migrations. It is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; you are also expected to work on different projects building out Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of the infrastructure work we do is for Sitecore.
Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-prem, on IaaS, on PaaS and in containers.
So, most of our DevOps work is currently planning, architecting and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of technical, commercial and service elements related to cloud migration and infrastructure deployments.
- The person selected for this position will ensure high customer satisfaction while delivering infrastructure and migration projects.
- The candidate should expect to work across multiple projects in parallel and must have a fully flexible approach to working hours.
- The candidate should stay current with the rapid technological advancements and developments taking place in the industry.
- The candidate should also have know-how in Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps and CI/CD pipelines.
Requirements:
- Bachelor’s degree in computer science or equivalent qualification.
- Total work experience of 6 to 8 Years.
- Total migration experience of 4 to 6 Years.
- A multi-cloud background (Azure/AWS/GCP)
- Implementation knowledge of VMs, VNets, etc.
- Know-how of cloud readiness and assessment
- Good understanding of the 6 Rs of migration (rehost, replatform, repurchase, refactor, retire, retain).
- Detailed understanding of the cloud offerings
- Ability to assess and perform discovery independently for any cloud migration.
- Working experience with containers and Kubernetes.
- Good knowledge of Azure Site Recovery/Azure Migrate/CloudEndure
- Understanding of vSphere and Hyper-V virtualization.
- Working experience with Active Directory.
- Working experience with AWS CloudFormation/Terraform templates.
- Working experience with VPN/ExpressRoute/peering/Network Security Groups/Route Tables/NAT Gateway, etc.
- Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps and GitHub Actions.
- High-availability and disaster-recovery implementations, taking RTO and RPO aspects into consideration.
- Candidates with AWS/Azure/GCP Certifications will be preferred.
● Good experience with continuous integration and deployment tools like Jenkins, Spinnaker, etc.
● Ability to understand problems and craft maintainable solutions.
● Working cross-functionally with a broad set of business partners to understand and integrate their API or data-flow systems with Xeno, so a minimal understanding of data and API integration is a must.
● Experience with Docker and microservice-based architecture using orchestration platforms like Kubernetes.
● Understanding of public cloud; we use Azure and Google Cloud.
● Familiarity with web servers like Apache, nginx, etc.
● Knowledge of monitoring tools such as Prometheus, Grafana, New Relic, etc. (a brief instrumentation sketch appears at the end of this posting).
● Scripting in languages like Python, Golang, etc. is required.
● Some knowledge of database technologies like MySQL and Postgres is required.
● Understanding of Linux, specifically Ubuntu.
● Bonus points for knowledge of, and best practices related to, security.
● Knowledge of Java or NodeJS would be a significant advantage.
Initially, when you join, some of the projects you'd get to own are:
● Auditing and improving the overall security of the infrastructure.
● Setting up different environments for different sets of teams (QA, Development, Business, etc.).
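For context on the monitoring point above, here is a minimal, purely illustrative Python sketch using the prometheus_client library: it exposes a counter and a latency histogram that Prometheus could scrape and Grafana could chart. The metric names, port and simulated workload are hypothetical, not part of the posting.

```python
# Minimal Prometheus instrumentation sketch: expose a request counter
# and a latency histogram on a /metrics endpoint for scraping.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

def handle_request():
    REQUESTS.inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(9100)  # hypothetical port; serves /metrics
    while True:
        handle_request()
```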
- Minimum 3+ years of experience in DevOps on the AWS platform
- Strong AWS knowledge and experience
- Experience in using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)
- Experience with IaC tools (Terraform)
- Excellent experience in operating a container orchestration cluster (Kubernetes, Docker)
- Significant experience with Linux operating system environments
- Experience with infrastructure scripting solutions such as Python/shell scripting
- Must have experience in designing infrastructure automation frameworks.
- Good experience in setting up monitoring tools and dashboards (Grafana/Kafka)
- Excellent problem-solving, log-analysis and troubleshooting skills
- Experience in setting up centralized logging for systems (EKS, EC2) and applications (see the sketch after this list)
- Process-oriented with great documentation skills
- Ability to work effectively within a team and with minimal supervision
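As a sketch of the centralized-logging item, the snippet below uses boto3 to pull the last hour of ERROR events from a CloudWatch Logs group. The log-group name and filter pattern are hypothetical placeholders, and a real setup would more likely ship logs via the CloudWatch agent or Fluent Bit before querying them like this.

```python
# Hypothetical sketch: query recent application errors from a
# CloudWatch Logs group (log group and filter are placeholders).
import time

import boto3

logs = boto3.client("logs")

resp = logs.filter_log_events(
    logGroupName="/eks/prod/app",                # hypothetical log group
    filterPattern="ERROR",
    startTime=int((time.time() - 3600) * 1000),  # last hour, in ms
)
for event in resp["events"]:
    print(event["timestamp"], event["message"].strip())
```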
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is mandatory
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy and Gradle)
• Knowledge of and hands-on experience with various platforms (e.g. GitLab, CircleCI and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials and key management.
• Proven experience in various coding languages (Java, Python) to support DevOps operations and cloud transformation
• Familiarity and knowledge of the web standards (e.g. REST APIs, web security mechanisms)
• Hands-on experience with GCP
• Experience in performance tuning, services outage management and troubleshooting.
Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management and organizational skills
• Ability to operate independently and make decisions with little direct supervision
Our Client is an IT infrastructure services company, focused and specialized in delivering solutions and services on Microsoft products and technologies. They are a Microsoft partner and cloud solution provider. Our Client's objective is to help small, mid-sized as well as global enterprises to transform their business by using innovation in IT, adapting to the latest technologies and using IT as an enabler for business to meet business goals and continuous growth.
With focused and experienced management and a strong team of IT Infrastructure professionals, they are adding value by making IT Infrastructure a robust, agile, secure and cost-effective service to the business. As an independent IT Infrastructure company, they provide their clients with unbiased advice on how to successfully implement and manage technology to complement their business requirements.
- Providing on-call support within a high availability production environment
- Logging issues
- Providing complex problem analysis and resolution for technical and application issues
- Supporting and collaborating with team members
- Running system updates
- Monitoring and responding to system alerts
- Developing and running system health checks
- Applying industry standard practices across the technology estate
- Performing system reviews
- Reviewing and maintaining infrastructure configuration
- Diagnosing performance issues and network bottlenecks
- Collaborating within geographically distributed teams
- Supporting software development infrastructure through continuous integration and delivery standards
- Working closely with developers and QA teams as part of a customer support centre
- Project delivery work, either individually or in conjunction with other teams, external suppliers or contractors
- Ensuring maintenance of the technical environments to meet current standards
- Ensuring compliance with appropriate industry and security regulations
- Providing support to Development and Customer Support teams
- Managing the hosted infrastructure through vendor engagement
- Managing 3rd-party software licensing, ensuring compliance
- Delivering new technologies as agreed by the business
What you need to have:
- Experience working within a technical operations environment relevant to the skills stated below.
- Be proficient in:
- Linux, zsh/bash/similar
- ssh, tmux/screen/similar
- vim/emacs/similar
- Computer networking
- Have a reasonable working knowledge of:
- Cloud infrastructure, preferably GCP
- One or more programming/ scripting languages
- Git
- Docker
- Web services and web servers
- Databases, relational and NoSQL
- Some familiarity with:
- Puppet, Ansible
- Terraform
- GitHub, CircleCI, Kubernetes
- Scripting language - Shell
- Databases: Cassandra, Postgres, MySQL or Cloud SQL
- Agile working practices, including Scrum and Kanban
- Private & public cloud hosting environments
- Strong technology interests with a positive ‘can do’ attitude
- Be flexible and adaptable to changing priorities
- Be good at planning and organising their own time and able to meet targets and deadlines without supervision
- Excellent written and verbal communication skills.
- Approachable with both colleagues and team members
- Be resourceful and practical with an ability to respond positively and quickly to technical and business challenges
- Be persuasive, articulate and influential, but down to earth and friendly with own team and colleagues
- Have an ability to establish relationships quickly and to work effectively either as part of a team or individually
- Be customer focused with both internal and external customers
- Be capable of remaining calm under pressure
- Technically minded, with good problem-resolution skills and a systematic manner
- Excellent documentation skills
- Prepared to participate in out of hours support rota
Title: .NET Developer with Cloud
Locations: Hyderabad, Chennai, Bangalore, Pune and New Delhi (Remote).
Job Type: Full Time
.NET Job Description:
Required experience in the skills below:
· Experience with MS Azure: App Service, Functions, Cosmos DB and Active Directory
· Deep understanding of C#, .NET Core, ASP.NET Web API 2, MVC
· Experience with MS SQL Server
· Strong understanding of object-oriented programming
· Experience working in an Agile environment.
· Strong understanding of code versioning tools such as Git or Subversion
· Usage of automated build and/or unit testing and continuous integration systems
· Excellent communication, presentation, influencing, and reasoning skills.
· Capable of building relationships with colleagues and key individuals.
· Must be capable of learning new technologies.
- Back-end development using Python/Django
- Front-end development using CSS, HTML and JS
- Write reusable, testable, and efficient code
- Implement security and data protection
- Use Amazon Relational Database Service
- Commit, push, pull and sync to Bitbucket, GitLab
- Deployment of code on MS Azure and AWS
- Build efficient scripts and cron jobs in GCP (see the sketch after this section)
- Connect apps and automate workflows using Integromat
- 3+ years of professional full-time experience building and maintaining complex software on a cross-functional team. You'll join us in writing clean, maintainable software that solves hard problems. You'll write testable, quality code. You'll push the team and the mission forward with your contributions.
- Python and Django
- Strong database skills
- Basic systems administration
- Bachelor's or Master's in Computer Science Engineering (or equivalent)
- Minimum product-dev experience of 3+ years in web/mobile startups, with expertise in designing and implementing high-performance web applications.
- You're an incessant problem solver: the tougher the problem gets, the more fun you have.
- You love to own end-to-end responsibility, starting from defining the problem statement (either yourself or alongside your peers), through development (PoC if needed), testing, releasing in staging and then production environments, and finally monitoring.
- Sound working knowledge of HTML, CSS and JS is an add-on
- Technical know-how of MS Azure, AWS and GCP is desirable
- Understand and keep the technical documentation up-to-date on Confluence
- Collaborate work using bug tracking and project management tools like Jira, Redmine
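Tying back to the "cron jobs in GCP" responsibility, the snippet below is a hypothetical sketch using the google-cloud-scheduler client to register a nightly HTTP-triggered job; the project, region, job name and endpoint are all placeholders, not details from the posting.

```python
# Hypothetical sketch: create a GCP Cloud Scheduler job that calls an
# HTTP endpoint every night. All names below are placeholders.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("my-project", "asia-south1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/nightly-cleanup",           # hypothetical job
    schedule="0 2 * * *",                            # 02:00 every day
    time_zone="Asia/Kolkata",
    http_target=scheduler_v1.HttpTarget(
        uri="https://example.com/tasks/cleanup",     # hypothetical endpoint
        http_method=scheduler_v1.HttpMethod.POST,
    ),
)
client.create_job(parent=parent, job=job)
```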
Responsibilities:
- Your primary responsibility as a senior backend engineer will be to architect and develop a scalable and robust microservices backend with strong Java, Spring (Boot), SQL and AWS/GCP skills.
- Experience being part of a software development team in an Agile/Lean/Continuous Delivery environment
- Be a key performer in a high-performance product engineering team
Qualifications:
- 2 to 4 years of overall IT experience, most of it in Java (Core Java, Spring Boot, Java collections, Java multithreading)
- Should have experience designing database schemas - SQL and NoSQL.
- Exposure to frameworks like Spring, Hibernate, Play would be a plus
- Experience with microservices architecture would be beneficial.
- Working knowledge of any public cloud (AWS, GCP or Azure)
- Broad understanding and experience of real-time analytics, NoSQL data stores, data modeling and data management, analytical tools, languages, or libraries
- Knowledge of container tech like Docker, Kubernetes would be a plus.
- Bachelor's Degree in Computer Science or Engineering.
● Research, propose and evaluate, with a 5-year vision, the architecture, design, technologies, processes and profiles related to Telco Cloud.
● Participate in the creation of a realistic technical-strategic roadmap to transform the network to Telco Cloud and prepare it for 5G.
● Using your deep technical expertise, provide detailed feedback to Product Management and Engineering, and contribute directly to the platform code base to enhance both the customer experience of the service and the SRE quality of life.
● Stay aware of trends in network infrastructure as well as within the network engineering and OSS community: what technologies are being developed or launched?
● Stay current with infrastructure trends in the telco network cloud domain.
● Be responsible for the engineering of Lab and Production Telco Cloud environments, including patches, upgrades, and reliability and performance improvements.
Required Minimum Qualifications: (Education and Technical Skills/Knowledge)
● Software Engineering degree, MS in Computer Science or equivalent experience
● Years of experience in an SRE, DevOps, development and/or support-related role
● 0-5 years of professional experience for a junior position
● At least 8 years of professional experience for a senior position
● Unix server administration and tuning: Linux/RedHat/CentOS/Ubuntu
● Deep knowledge of networking layers 1-4
● Cloud/virtualization (at least two): Helm, Docker, Kubernetes, AWS, Azure, Google Cloud, OpenStack, OpenShift, VMware vSphere/Tanzu (see the sketch after this list)
● In-depth knowledge of cloud storage solutions on top of AWS, GCP, Azure and/or on-prem private cloud, such as Ceph, CephFS, GlusterFS
● DevOps: Jenkins, Git, Azure DevOps, Ansible, Terraform
● Backend knowledge: Bash, Python, Go (knowledge of other scripting languages is a plus).
● PaaS-level solutions such as Keycloak for IAM, Prometheus, Grafana, ELK and DBaaS (such as MySQL, Cassandra)
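For a flavour of the Kubernetes-facing work, here is a minimal, illustrative script using the official Kubernetes Python client to flag pods that are not in a Running state; it assumes a reachable cluster and a local kubeconfig, and nothing in it is specific to this posting.

```python
# Illustrative SRE-style check: list pods across all namespaces that
# are not Running, using the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    if pod.status.phase != "Running":
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```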
About the Organisation:
The team at Coredge.io combines experienced and young professionals with many years of experience in edge computing, telecom application development and Kubernetes. The company has continuously collaborated with the open-source community, universities and major industry players in furthering its goal of providing the industry with an indispensable tool to offer improved services to its customers. Coredge.io has a global market presence, with offices in the US and New Delhi, India.
A CMMI Level 5 IT MNC from the USA
We are hiring a Lead DevOps Engineer in the cloud domain with hands-on experience in Azure/GCP.
- Expertise in managing Cloud/VMware resources and good exposure to Docker/Kubernetes
- Working knowledge of operating systems (Unix, Linux, IBM AIX)
- Experience in installing, configuring and managing the Apache web server and Tomcat/JBoss
- Good understanding of the JVM, troubleshooting and performance tuning through thread-dump and log analysis
- Strong expertise in DevOps tools:
- Deployment (Chef/Puppet/Ansible/Nebula/Nolio)
- SCM (TFS, Git, ClearCase)
- Build tools (Ant, Maven, Make, Gradle)
- Artifact repositories (Nexus, JFrog Artifactory)
- CI tools (Jenkins, TeamCity)
- Experienced in scripting languages: Python, Ant, Bash and shell
What will be required of you?
- Responsible for implementation and support of application/web server infrastructure for complex business applications
- Server configuration management, release management, deployments, automation & troubleshooting
- Set up and configure Development, Staging, UAT and Production server environments for projects and install/configure all dependencies using industry best practices
- Manage code repositories
- Manage, document, control and innovate development and release procedures.
- Configure automated deployment on multiple environments
- Hands-on working experience with Azure or GCP.
- Transfer knowledge of the implementation to the support team and, until then, support any production issues