Google Cloud Platform (GCP) Jobs in Pune


Apply to 50+ Google Cloud Platform (GCP) Jobs in Pune on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.

StarApps Studio

Posted by Shreya Pillai
Pune
7 - 14 yrs
₹30L - ₹45L / yr
React.js
SQL
Ruby on Rails (ROR)
Python
Java
+6 more

About StarApps Studio:

 

At StarApps, we are a leading SaaS company committed to delivering state-of-the-art software solutions that enhance the sales performance of Shopify merchants worldwide. Our innovative tools are trusted by over 20,000 businesses, impacting over 100 million shoppers daily. With a spirit of excellence and a commitment to quality, we are scaling new heights in the e-commerce domain. Discover more about our unique culture and values by exploring our Culture Book: https://cdn.starapps.studio/files/StarApps+Culture+Book.pdf

 

The Opportunity:

 

We are seeking an experienced Engineering Manager to spearhead our development team. Your role will be critical in overseeing code reviews, offering mentorship, ensuring our software is at the forefront of the industry, and leading our efforts to innovate and maintain a competitive edge. Your strategic insight will be key to developing software solutions that anticipate the future dynamics of the e-commerce landscape.


NOTE: E-COMMERCE OR SAAS EXPERIENCE IS MANDATORY


Key Responsibilities:

 

Technical Guidance:

  • Provide technical leadership and guidance to the development team.
  • Review and guide the team in implementing best coding practices and standards.
  • Conduct thorough code reviews and mentor team members to improve their coding skills.

 

Project Management:

  • Work closely with product managers to understand feature requirements and translate them into technical tasks.
  • Create and manage project timelines, ensuring that deadlines are met.
  • Oversee the entire software development life cycle.

 

Team Collaboration:

  • Foster a collaborative and positive team environment.
  • Coordinate with cross-functional teams (testing, merchant success, product management, etc.) to ensure smooth project execution.
  • Facilitate communication and information flow within the team.

 

Skill Development:

  • Identify skill gaps within the team and develop plans for skill enhancement.
  • Provide training opportunities and support continuous learning.

 

Process Improvement:

  • Evaluate and improve development processes for efficiency and effectiveness.
  • Implement best practices for Agile development methodologies.

 

Problem Solving:

  • Act as a point of contact for technical issues and provide solutions.
  • Troubleshoot and resolve technical challenges faced by the team.

 

What You'll Bring:

  • Proven experience as an Engineering Manager within the e-commerce or SaaS industry.
  • Expertise in our core tech stack: TypeScript, React, Ruby on Rails, Python, MySQL, and PostgreSQL.
  • Strong understanding of cloud computing services and architecture (AWS, GCP, or Azure).
  • Exceptional problem-solving skills and a keen analytical mindset.
  • Experience in agile methodologies and a track record of working in a dynamic, fast-paced environment.
  • Excellent communication and leadership skills, with the ability to inspire and motivate a high-performing team.
  • Commitment to continuous learning and improvement, with a passion for technology and e-commerce.

 

Why Join StarApps?

  • Competitive salary and benefits package.
  • Opportunity to work with a global team and world-class talent.
  • State-of-the-art professional development opportunities, including workshops, courses, and conferences.
  • Flexible vacation policy and a dedicated day for charitable activities with donation matching.
  • A culture that values creativity, innovation, and impact.

 

Join Us:

 

If you are passionate about leading a team that's at the forefront of e-commerce innovation, we would love to hear from you. Please apply with your resume and a cover letter that showcases your journey as a technical leader, your achievements, and how you can contribute to the continued success of StarApps.

DeepIntent

Posted by Indrajeet Deshmukh
Pune
3 - 5 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

About DeepIntent:

DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioural, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences and message them on a one-to-one basis in a privacy-compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.


What You’ll Do:

  • Establish a formal data practice for the organisation.
  • Build and operate scalable and robust data architectures.
  • Create pipelines for the self-service introduction and usage of new data.
  • Implement DataOps practices.
  • Design, develop, and operate data pipelines which support data scientists and machine learning engineers (a brief sketch follows this list).
  • Build simple, highly reliable data storage, ingestion, and transformation solutions which are easy to deploy and manage.
  • Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
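As a hedged illustration of the pipeline work described above, here is a minimal PySpark sketch that ingests raw events, cleans them, and writes a partitioned table for ML and reporting consumers. The bucket paths and column names are hypothetical, and a GCS connector is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("campaign-events-etl").getOrCreate()

# Ingest raw JSON events (hypothetical bucket and layout).
events = spark.read.json("gs://example-bucket/raw/campaign_events/")

# Basic cleansing and enrichment: drop malformed rows, derive an event date.
cleaned = (
    events
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_date", F.to_date(F.col("event_ts")))
)

# Write a partitioned, query-friendly table for downstream ML and reporting.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("gs://example-bucket/curated/campaign_events/")
)
```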

Who You Are:

  • Experience in designing, developing and operating configurable data pipelines serving high-volume and high-velocity data.
  • Experience working with public clouds like GCP/AWS.
  • Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
  • Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
  • Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.
  • Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
  • Good communication skills with the ability to collaborate with both technical and non-technical people.
  • Ability to think big, take bets and innovate, dive deep, bias for action, hire and develop the best, learn and be curious.

 

Publicis Sapient

Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the overall health of the solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience with 3+ years in data-related technologies.

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to at least one cloud platform's related data services (AWS / Azure / GCP).

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines (a brief sketch follows this list).

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Well-versed, working knowledge of data platform related services on at least one cloud platform, IAM and data security.
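As a hedged sketch of the batch-ingestion work item 3 refers to, the following PySpark snippet pulls a relational table into the data lake over JDBC and lands it as Parquet on HDFS. The connection details, table and path names are hypothetical, and the appropriate JDBC driver is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Pull a relational table into the lake over JDBC (hypothetical connection).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://example-host:3306/sales")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "change-me")  # in practice, inject via a secret store
    .load()
)

# Land it as Parquet on HDFS for downstream processing.
orders.write.mode("overwrite").parquet("hdfs:///data/raw/orders")
```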


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD – infra provisioning on cloud, automated build and deployment pipelines, code quality.

6. Cloud data specialty and other related Big Data technology certifications.


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Publicis Sapient

Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the overall health of the solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies.

2. Minimum 1.5 years of experience in Big Data technologies.

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage (a brief sketch follows this list).

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
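As a hedged illustration of the real-time pipelines mentioned in item 3, this PySpark Structured Streaming sketch reads events from Kafka and appends micro-batches to Parquet. The broker, topic and paths are hypothetical, and the spark-sql-kafka package is assumed to be available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-stream").getOrCreate()

# Read a stream of events from Kafka (hypothetical broker and topic).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers bytes; decode the value column before use.
decoded = raw.select(F.col("value").cast("string").alias("payload"))

# Append micro-batches to Parquet, checkpointing for fault-tolerant recovery.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "hdfs:///data/streaming/clickstream")
    .option("checkpointLocation", "hdfs:///checkpoints/clickstream")
    .start()
)
query.awaitTermination()
```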


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD – infra provisioning on cloud, automated build and deployment pipelines, code quality.

6. Working knowledge of data platform related services on at least one cloud platform, IAM and data security.

7. Cloud data specialty and other related Big Data technology certifications.



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Tekdi Technologies Pvt. Ltd.
Posted by Tekdi Recruitment
Pune
5 - 11 yrs
₹7L - ₹15L / yr
Amazon Web Services (AWS)
CI/CD
SQL Azure
Google Cloud Platform (GCP)
DevOps

Key Responsibilities:

  1. Cloud Infrastructure Management: Oversee the deployment, scaling, and management of cloud infrastructure across platforms like AWS, GCP, and Azure. Ensure optimal configuration, security, and cost-effectiveness.
  2. Application Deployment and Maintenance: Responsible for deploying and maintaining web applications, particularly those built on Django and the MERN stack (MongoDB, Express.js, React, Node.js). This includes setting up CI/CD pipelines, monitoring performance, and troubleshooting.
  3. Automation and Optimization: Develop scripts and automation tools to streamline operations. Continuously seek ways to improve system efficiency and reduce downtime (a brief example follows this list).
  4. Security Compliance: Ensure that all cloud deployments comply with relevant security standards and practices. Regularly conduct security audits and coordinate with security teams to address vulnerabilities.
  5. Collaboration and Support: Work closely with development teams to understand their needs and provide technical support. Act as a liaison between developers, IT staff, and management to ensure smooth operation and implementation of cloud solutions.
  6. Disaster Recovery and Backup: Implement and manage disaster recovery plans and backup strategies to ensure data integrity and availability.
  7. Performance Monitoring: Regularly monitor and report on the performance of cloud services and applications. Use data to make informed decisions about upgrades, scaling, and other changes.
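To give a flavor of the automation scripting mentioned in item 3, here is a small, hypothetical Python health-check that a scheduler or CI job could run; the endpoints and behavior are illustrative, not part of the role's actual stack.

```python
import sys
import requests

# Hypothetical service endpoints to probe.
ENDPOINTS = [
    "https://app.example.com/healthz",
    "https://api.example.com/healthz",
]

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True when the endpoint answers HTTP 200 within the timeout."""
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

failures = [url for url in ENDPOINTS if not is_healthy(url)]
if failures:
    print("UNHEALTHY:", ", ".join(failures))
    sys.exit(1)  # non-zero exit lets the scheduler raise an alert
print("all endpoints healthy")
```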


Required Skills and Experience:

  • Proven experience in managing cloud infrastructure on AWS, GCP, and Azure.
  • Strong background in deploying and maintaining Django-based and MERN stack web applications.
  • Expertise in automation tools and scripting languages.
  • Solid understanding of network architecture and security protocols.
  • Experience with continuous integration and deployment (CI/CD) methodologies.
  • Excellent problem-solving abilities and a proactive approach to system optimization.
  • Good communication skills for effective collaboration with various teams.


Desired Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Relevant certifications in AWS, GCP, or Azure are highly desirable.
  • Several years of experience in a DevOps or similar role, with a focus on cloud computing and web application deployment.


Thoughtworks

Posted by Ramya S
Pune, Hyderabad, Chennai, Gurugram
3 - 5 yrs
Best in industry
Spark
PySpark
Data engineering
Big Data
Hadoop
+6 more

DATA ENGINEER – CONSULTANT


Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.


Job Responsibilities

• You will partner with teammates to create complex data processing pipelines to solve our clients' most complex challenges

• You will collaborate with Data Scientists to design scalable implementations of their models

• You will pair to write clean and iterative code based on TDD (a brief example follows this list)

• Leverage various continuous delivery practices to deploy, support and operate data pipelines

• Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

• Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

• Create data models and speak to the tradeoffs of different modelling approaches

• Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

• Assure effective collaboration between Thoughtworks and the client's teams, encouraging open communication and advocating for shared outcomes
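To give a hedged flavor of the TDD style mentioned above, here is a minimal pytest sketch for a small data transformation written test-first; the function and field names are illustrative, not a Thoughtworks standard.

```python
# transform.py: a small transformation developed test-first (hypothetical).
def normalize_amount(record: dict) -> dict:
    """Convert a string amount in cents to a float amount in dollars."""
    out = dict(record)
    out["amount"] = int(record["amount_cents"]) / 100.0
    del out["amount_cents"]
    return out


# test_transform.py: written before the implementation above, run with pytest.
def test_normalize_amount():
    record = {"order_id": "o-1", "amount_cents": "1299"}
    assert normalize_amount(record) == {"order_id": "o-1", "amount": 12.99}
```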


Job Qualifications


Technical skills

• You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

• You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

• Hands-on experience with MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

• You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

• Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems

• You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments


Professional skills

• You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

• An interest in coaching, sharing your experience and knowledge with teammates

• You enjoy influencing others and always advocate for technical excellence while being open to change when needed

• Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more


Other things to know


Learning & Development


There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best, and that extends to empowering our employees in their career journeys.


About Thoughtworks

Thoughtworks is a global technology consultancy that integrates strategy, design and engineering to drive digital innovation. For over 30 years, our clients have trusted our autonomous teams to build solutions that look past the obvious. Here, computer science grads come together with seasoned technologists, self-taught developers, midlife career changers and more to learn from and challenge each other. Career journeys flourish with the strength of our cultivation culture, which has won numerous awards around the world.

Arahas Technologies
Posted by Nidhi Shivane
Pune
3 - 8 yrs
₹10L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+3 more


Role Description

This is a full-time hybrid role as a GCP Data Engineer. You will be responsible for managing large sets of structured and unstructured data and developing processes to convert data into insights, information, and knowledge.

Skill Name: GCP Data Engineer

Experience: 7-10 years

Notice Period: 0-15 days

Location: Pune

If you have a passion for data engineering and possess the following skills, we would love to hear from you:


🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)

🔹 At least 4 years of experience on Google Cloud Platform, with a focus on BigQuery

🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API Scripting

🔹 Experience in the Finance/Revenue domain would be considered an added advantage

🔹 Familiarity with GCP migration activities and the dbt tool would also be beneficial


You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.

Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.
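As a hedged illustration of the BigQuery scripting this role calls for, the sketch below runs a query with the official google-cloud-bigquery Python client; the project, dataset and table names are hypothetical.

```python
from google.cloud import bigquery

# Credentials come from the environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client(project="example-project")

# Aggregate daily revenue from a hypothetical billing table.
query = """
    SELECT DATE(charge_ts) AS day, SUM(amount) AS revenue
    FROM `example-project.finance.charges`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 30
"""

for row in client.query(query).result():
    print(f"{row.day}: {row.revenue:.2f}")
```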


Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.

Ignite Solutions

Posted by Meghana Dhamale
Remote, Pune
5 - 7 yrs
₹15L - ₹20L / yr
Python
LinkedIn
Django
Flask
Amazon Web Services (AWS)
+2 more

We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends. 

 What will you work on?

  • Interface with clients
  • Recommend tech stacks
  • Define end-to-end logical and cloud-native architectures
  • Define APIs (a minimal sketch follows this list)
  • Integrate with 3rd-party systems
  • Create architectural solution prototypes
  • Hands-on coding, team leading, code reviews, and problem-solving
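As a hedged sketch of the API-definition work above, here is a minimal Flask service exposing a single JSON endpoint; the route and payload are illustrative, not a client specification.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# One illustrative endpoint; a real service would add auth, validation, etc.
@app.get("/api/v1/status")
def status():
    return jsonify({"service": "example", "status": "ok"})

if __name__ == "__main__":
    app.run(port=8000, debug=True)
```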

What Makes You A Great Fit?

  • 5+ years of software experience 
  • Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
  • Solid expertise and hands-on experience in Python with Flask or Django
  • Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
  • Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
  • Knowledge of DevOps practices
  • Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
  • Excellent communication skills, verbal and written

The job is for a full-time position at our Pune (Viman Nagar) office (map: https://goo.gl/maps/o67FWr1aedo).

(Note: We are working remotely at the moment. However, once the COVID situation improves, the candidate will be expected to work from our office.)

Higgs Boson Health Pvt Ltd
Posted by Jyoti Mahadik
Pune
5 - 8 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+1 more

Responsibilities:

  • Design, implement, and maintain scalable and secure infrastructure using AWS to support the deployment and operation of our applications.
  • Develop and maintain automated build, deployment, and monitoring systems.
  • Collaborate with development teams to optimize application performance, scalability, and reliability in production environments.
  • Implement and enforce security best practices, including access controls, encryption, and vulnerability management.
  • Monitor system health, performance, and capacity, and proactively identify and resolve issues (a brief scripting example follows this list).
  • Implement and maintain containerization technologies (e.g., Docker, Kubernetes) for efficient deployment and orchestration of applications.
  • Implement and manage infrastructure-as-code (IaC) solutions, preferably Terraform, to enable efficient provisioning and management of infrastructure resources.
  • Troubleshoot and resolve production incidents, collaborating with development and operations teams to implement remediation measures.
  • Stay updated with industry trends, emerging technologies, and best practices related to DevOps and cloud infrastructure.
  • Mentor and provide guidance to junior team members, fostering a culture of knowledge sharing and continuous learning.
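As one hedged example of the monitoring automation described above, this sketch uses boto3 to list CloudWatch alarms currently firing; the region is illustrative, and credentials are assumed to come from the environment or an instance profile.

```python
import boto3

# Region is illustrative; credentials come from the environment/instance profile.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Fetch alarms that are currently in the ALARM state.
response = cloudwatch.describe_alarms(StateValue="ALARM")

for alarm in response["MetricAlarms"]:
    print(f"{alarm['AlarmName']}: {alarm['StateReason']}")
```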


Qualifications:

  • Bachelor's or master's degree in Computer Science, Engineering, or a related field.
  • Proven experience as a DevOps Engineer or similar role, preferably in a senior or leadership capacity.
  • Strong understanding of AWS platform and proficiency in implementing and managing infrastructure in a cloud environment.
  • Experience with containerization technologies, such as Docker and Kubernetes, and related orchestration tools.
  • Experience with infrastructure-as-code (IaC) frameworks (e.g., Terraform, CloudFormation).
  • Strong knowledge of networking concepts and protocols, security best practices, and system monitoring and logging tools.
  • Familiarity with agile methodologies and collaboration tools (e.g., Jira, Confluence).
  • Excellent problem-solving and troubleshooting skills, with the ability to analyze complex systems and identify root causes.
  • Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
  • Ability to work independently and lead initiatives, as well as mentor and guide junior team members.
  • Email your CV to jobs@higgsbosonhealth.com


DeepIntent

Posted by Indrajeet Deshmukh
Pune
4 - 8 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
SQL
Google Cloud Platform (GCP)
+3 more

Who We Are:

DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.

What You’ll Do:

We are looking for a Senior Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.  

This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is a self-starter who is inquisitive, not afraid to take on and learn from challenges, and constantly seeking to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.

  • Serve as the Engineering interface between the Analytics and Engineering teams
  • Develop and standardize all interface points for analysts to retrieve and analyze data, with a focus on research methodologies and data-based decisioning
  • Optimize queries and data access efficiencies; serve as the expert on how to most efficiently attain desired data points
  • Build “mastered” versions of the data for Analytics-specific querying use cases
  • Help with data ETL and table performance optimization (a brief sketch follows this list)
  • Establish a formal data practice for the Analytics practice in conjunction with the rest of DeepIntent
  • Build and operate scalable and robust data architectures
  • Interpret analytics methodology requirements and apply them to the data architecture to create standardized queries and operations for use by analytics teams
  • Implement DataOps practices
  • Master existing and new data pipelines and develop appropriate queries to meet analytics-specific objectives
  • Collaborate with various business stakeholders, software engineers, machine learning engineers and analysts
  • Operate between Engineers and Analysts to unify both practices for analytics insight creation
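As a hedged illustration of the “mastered” analytics tables described above, this sketch materializes a pre-aggregated table with the google-cloud-bigquery client; the project, dataset and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Materialize an analyst-friendly aggregate from a hypothetical raw events table.
ddl = """
    CREATE OR REPLACE TABLE `example-project.analytics.campaign_daily` AS
    SELECT campaign_id, DATE(event_ts) AS day, COUNT(*) AS impressions
    FROM `example-project.raw.ad_events`
    GROUP BY campaign_id, day
"""

client.query(ddl).result()  # blocks until the job finishes
print("analytics.campaign_daily refreshed")
```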

Who You Are:

  • Adept in market research methodologies and using data to deliver representative insights
  • Inquisitive and curious; understands how to query complicated data sets and move and combine data between databases
  • Deep SQL experience is a must
  • Exceptional communication skills, with the ability to collaborate with and translate between technical and non-technical needs
  • English-language fluency and proven success working with teams in the U.S.
  • Experience in designing, developing and operating configurable data pipelines serving high-volume and high-velocity data
  • Experience working with public clouds like GCP/AWS
  • Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies
  • Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
  • Proficient with SQL, Python or a JVM-based language, and Bash
  • Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
  • Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious
  • Comfortable working in the EST time zone


codersbrain

Posted by Tanuj Uppal
Hyderabad, Pune, Noida, Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
Go Programming (Golang)
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure

Golang Developer

Location: Chennai/ Hyderabad/Pune/Noida/Bangalore

Experience: 4+ years

Notice Period: Immediate/ 15 days

Job Description:

  • Must have at least 3 years of experience working with Golang.
  • Strong Cloud experience is required for day-to-day work.
  • Experience with the Go programming language is necessary.
  • Good communication skills are a plus.
  • Skills: AWS, GCP, Azure, Golang
Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions, and build strategy and roadmap
• Possesses wide exposure to the complete lifecycle of data, from creation to consumption
• Has in the past built repeatable tools / data models to solve specific business problems
• Should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
  o Provide consultation to senior client personnel
  o Implement and enhance data warehouses or data lakes
  o Work with business teams, or be part of the team that implemented process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• Must have a solution-creation mindset and the ability to design and enhance scalable data platforms to address the business need
• Working experience with data engineering tools on one or more cloud platforms – Snowflake, AWS/Azure/GCP
• Engages with technology teams from Tredence and clients to create last-mile connectivity of the solutions; should have experience of working with technology teams
• Demonstrated ability in thought leadership – articles/white papers/interviews

Mandatory skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Rishabh Software
Pune, Ahmedabad, Vadodara
9 - 12 yrs
Best in industry
Microservices
Spring Boot
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+8 more

An excellent opportunity to develop products as an SDE 3.

Rishabh Software, an India based IT service provider, focuses on cost-effective, qualitative, and timely delivered Offshore Software Development, Business Process Outsourcing (BPO) and Engineering Services.

Our Core competency lies in developing customized software solutions using web-based and client/server technology. With over 20 years of Software Development Experience working together with various domestic and international companies, we, at Rishabh Software, provide specific solutions as per the client requirements that help industries of different domains to change business problems into strategic advantages.

The Product Development division is relatively new and comes with a start-up culture, where a long path has been, and is being, constructed for developing reliable and scalable products.

Through our offices in the US (Silicon Valley), UK (London) and India (Vadodara & Bangalore) we service our global clients with qualitative and well-executed software development, BPO and Engineering services.


Please find the below JD.


Key Responsibilities

  • Responsible to interpret & map business, functional & non-functional requirements to technical specifications
  • Will be interacting with diverse stakeholders like Product Manager/Scrum master, Business Analysts, testing and other cross-functional teams as part of product  development
  • Develop solutions following established technical design, application development standards and quality processes in projects to deliver efficient, reusable, and reliable code
  • Write unit test cases for developed code as required followed by developing solutions for established technical design, application development standards and quality processes in projects to deliver efficient, reusable and reliable code
  • Perform code reviews and mentor fellow team members
  • Follow best practices to ensure the best possible performance, quality, and responsiveness of the applications
  • Assess the impacts on technical design because of the changes in functional requirements
  • Support the Technical Lead/Architect in developing artifacts such as high-level design, technical design, etc.
  • Proactively identify and communicate technical risks, issues, and challenges with mitigations
  • Manage and lead a team proactively providing guidance and mentoring as required


Technical Skills


Mandatory (Minimum 9 years of working experience)

 

  • Well-versed with Architecture and Design patterns.
  • Practice the industry's leading best guidelines/processes in building enterprise products
  • Strong experience in core Java, Spring, Spring Boot, Spring Cloud, HTML, CSS, Bootstrap, JavaScript, jQuery, JSON, JWT, multi-threading, messaging frameworks (Kafka, RabbitMQ, etc.), microservices, REST, SOAP, gRPC
  • Excellent knowledge of relational databases (MySQL, Postgres), NoSQL (Cassandra, MongoDB) and ORM frameworks (JPA, Hibernate)
  • Knowledge of Docker, Kubernetes and containerization. Experience with cloud providers like AWS, and Azure.
  • Hands-on experience in designing and developing products using Java EE platforms, Microservices architecture
  • Experience with RESTful services as well as SOAP-based web services
  • Good knowledge of Java 8 and above with core areas like Streams, Lambdas, Functional Interfaces, Concurrency, Generics, threads, networking, IO, collections
  • Excellent knowledge of and experience in microservices

Preferred 

 

  • Experience in reactive programming- Webflux, Hibernate Reactive. Knowledge of GraphQL
  • Java testing frameworks (JUnit, Mockito, TestNG etc.)
  • Knowledge of CI/CD tools (Jenkins, CruiseControl, Bamboo, etc.) and DevOps
  • Knowledge of build tools (Ant, Maven, Gradle, etc.)
  • Knowledge of BPMN, Rule-based Engine, Search Engine
  • Knowledge of JS framework like Angular

 

You would be part of

  • Exciting journey in building next generation enterprise products
  • Flat organisation structure
  • Enriches both domain and technical skills

 

Soft Skills

  • Good verbal and written communication skills
  • Ability to collaborate and work effectively in a team
  • Excellent analytical and logical skills

Education

  • Preferred: Graduate or Post Graduate with specialization related to Computer Science or IT
Pune
9 - 13 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
  • Hands-on knowledge of various CI/CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/GitHub, SonarQube), including setting up automated build-deployment pipelines.
  • Very good knowledge of scripting tools and languages such as Shell, Perl or Python, YAML/Groovy, and build tools such as Maven/Gradle.
  • Hands-on knowledge of containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
  • Good knowledge of configuration management tools such as Ansible and Puppet/Chef, and experience setting up monitoring tools (Splunk/Geneos/New Relic/ELK).
  • Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
  • Hands-on knowledge of cloud technology (preferably GCP), including various computing services and infrastructure setup using Terraform.
  • Should have a basic understanding of networking, certificate management, Identity and Access Management, and information security/encryption concepts.
  • Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration and system/interface integration.
  • Should have experience working in an Agile-based SDLC delivery model, and be able to multi-task and support multiple systems/apps.
  • Big Data and Hadoop ecosystem knowledge is good to have but not mandatory.
  • Should have worked with standard release, change and incident management tools such as ServiceNow/Remedy or similar.
MNC

Agency job
via Bohiyaanam Talent Solutions by Harsha Manglani
Pune
6 - 9 yrs
₹1L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

We are hiring a DevOps Engineer for a reputed MNC.

 

Job Description:

Total exp- 6+Years

Must have:

Minimum 3-4 years hands-on experience in Kubernetes and Docker

Proficiency in AWS Cloud

Good to have Kubernetes admin certification

 

Job Responsibilities:

Responsible for managing Kubernetes cluster

Deploying infrastructure for the project

Build CI/CD pipelines

 

Looking for Immediate Joiners only

Location: Pune

Salary: As per market standards

Mode: Work from office



 
xpressbees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured / unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting and dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools (a brief data-quality sketch follows this list).
• Meet the data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured). Envision and implement the optimal data modelling, physical design and performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.
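As a hedged illustration of the completeness and freshness requirements mentioned above, this PySpark sketch validates a hypothetical shipments table before it is published downstream; paths, columns and thresholds are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("shipments-dq-check").getOrCreate()

shipments = spark.read.parquet("s3a://example-bucket/curated/shipments/")

# Completeness: critical identifiers must never be null.
null_ids = shipments.filter(F.col("shipment_id").isNull()).count()

# Freshness: inspect the newest record's timestamp.
latest = shipments.agg(F.max("updated_at").alias("latest")).collect()[0]["latest"]

print(f"null shipment_ids: {null_ids}, latest record: {latest}")
if null_ids > 0:
    raise ValueError("completeness check failed: null shipment_id rows found")
```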

Qualifications & Experience Relevant for the Role

• A bachelor's degree in Computer Science or a related field with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing and micro-batching to make technology and design choices.
• Strong experience in system integration, application development, ETL and data-platform projects; talented across technologies used in the enterprise space.
• Software development experience with expertise in relational and dimensional modelling and exposure across the whole SDLC process.
• Experience in cloud architecture (AWS).
• Proven track record in keeping existing technical skills current and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to learning new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems.
• Passion for educating, training, designing, and building end-to-end systems.
marsdevs.com
Posted by Vishvajit Pathak
Remote, Pune
2 - 5 yrs
₹4L - ₹15L / yr
Django
Flask
FastAPI
Python
Docker
+3 more

We are having an immediate requirement for a Python web developer.

 

You have:

  • At least 2 years of experience developing web applications with Django/Flask/FastAPI (a minimal example follows this list)
  • Familiarity with Linux
  • Experience in both SQL and NoSQL databases.
  • Uses Docker and CI/CD
  • Writes tests
  • Experienced in application deployments and scaling them on AWS or GCP
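As a hedged sketch of the framework experience listed above, here is a minimal FastAPI application with two endpoints; the route and model names are illustrative.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# In-memory store, for illustration only.
items: list[Item] = []

@app.post("/items")
def create_item(item: Item) -> Item:
    items.append(item)
    return item

@app.get("/items")
def list_items() -> list[Item]:
    return items

# Run with: uvicorn main:app --reload
```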

 

You are:

  • Eager to work independently without being watched
  • Easy going.
  • Able to handle clients on your own

 

Location: Remote (in India)

 

Anetcorp Ind Pvt Ltd
Posted by Jyoti Yadav
Remote, Pune
6 - 12 yrs
₹10L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
  • Essential skills:
    • Docker
    • Jenkins
    • Python dependency management using conda and pip
  • Base Linux system commands and scripting
  • Docker container build and testing
    • Common knowledge of minimizing container size and layers
    • Inspecting containers for unused / underutilized systems
    • Multiple Linux OS support for virtual systems
  • Has experience as a user of Jupyter / JupyterLab to test and fix usability issues in workbenches
  • Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages / libraries; a brief sketch follows this list)
  • Jenkins Pipeline
  • GitHub API understanding to trigger builds, tags, releases
  • Artifactory experience
  • Nice to have: Kubernetes, ArgoCD, other deployment automation tool sets (DevOps)
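As a hedged illustration of the configuration templating mentioned above, this sketch renders per-environment configs with Jinja2; the template and variables are hypothetical.

```python
from jinja2 import Template

# A hypothetical service config template; in practice it would live in a file.
template = Template(
    "service: {{ name }}\n"
    "replicas: {{ replicas }}\n"
    "log_level: {{ 'debug' if env == 'dev' else 'info' }}\n"
)

# Render one variant per target environment.
for env, replicas in [("dev", 1), ("prod", 4)]:
    print(template.render(name="workbench", env=env, replicas=replicas))
```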
DREAMS Pvt Ltd
Posted by Siddhant Malani
Pune, Bengaluru (Bangalore)
0 - 1 yrs
₹8000 - ₹12000 / mo
Flutter
DART
User Interface (UI) Design
User Experience (UX) Design
RESTful APIs
+3 more

Role Description for the 3-month internship:

• Create multi-platform apps for iOS & Android using Google's Flutter development framework
• Strong OO design and programming skills in Dart and the SDK framework for building Android as well as iOS apps
• Good expertise in Auto Layout and adding constraints programmatically
• Must have experience of memory management, caching mechanisms, threading and performance tuning
• Familiarity with RESTful APIs to connect Android & iOS applications to back-end services
• Experience with third-party libraries and APIs
• Collaborate with the team of product managers and developers to define, design, & deploy new features & functionality
• Build software that ensures the best possible usability, performance, quality, & responsiveness of features
• Work in a team following agile development practices (Scrum)
• Proficient understanding of code versioning tools such as Git, Mercurial, or SVN, and project management tools (JIRA)
• Utilize your knowledge of the general mobile landscape, architectures, trends, & emerging technologies
• Get a solid understanding of the full mobile development life cycle and make use of the same
• Help develop and deploy good-quality UI
• Good written, verbal, organizational and interpersonal skills
• Unit-test code for robustness, including edge cases, usability, and general reliability
• Excellent debugging and optimization skills
• Strong design, development and debugging skills

Senwell Solutions

Posted by Trupti Gholap
Pune
1 - 3 yrs
₹2L - ₹7L / yr
Angular (2+)
AngularJS (1.x)
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more

We are looking to hire an experienced Sr. Angular Developer to join our dynamic team. As a lead developer, you will be responsible for creating a top-level coding base using Angular best practices. To ensure success as an Angular developer, you should have extensive knowledge of theoretical software engineering, be proficient in TypeScript, JavaScript, HTML, and CSS, and have excellent project management skills. Ultimately, a top-class Angular developer can design and build a streamlined application to company specifications that perfectly meets the needs of the user.

 

Requirements:

 

  1. Bachelor’s degree in computer science, computer engineering, or similar
  2. Previous work experience of 2+ years as an Angular developer
  3. Proficient in CSS, HTML, and writing cross-browser compatible code
  4. Experience using JavaScript & TypeScript build tools like Gulp or Grunt
  5. Knowledge of JavaScript MV-VM/MVC frameworks including AngularJS / React
  6. Excellent project management skills

 

Responsibilities:

 

  1. Designing and developing user interfaces using Angular best practices.
  2. Adapting interface for modern internet applications using the latest front-end technologies.
  3. Writing TypeScript, JavaScript, CSS, and HTML.
  4. Developing product analysis tasks.
  5. Making complex technical and design decisions for AngularJS projects.
  6. Developing application codes in Angular, Node.js, and Rest Web Services.
  7. Conducting performance tests.
  8. Consulting with the design team.
  9. Ensuring high performance of applications and providing support.

 

Remote, Pune, Mumbai, Bengaluru (Bangalore), Gurugram, Hyderabad
15 - 25 yrs
₹35L - ₹55L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
Architecture
Python
+5 more
  • 15+ years of Hands-on technical application architecture experience and Application build/ modernization experience
  • 15+ years of experience as a technical specialist in Customer-facing roles.
  • Ability to travel to client locations as needed (25-50%)
  • Extensive experience architecting, designing and programming applications in an AWS Cloud environment
  • Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
  • Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
  • Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
  • Agile software development expert
  • Experience with continuous integration tools (e.g. Jenkins)
  • Hands-on familiarity with CloudFormation
  • Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
  • Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
  • Strong practical application development experience on Linux and Windows-based systems
  • Extracurricular software development passion (e.g. active open-source contributor)
InFoCusp

Posted by Apurva Gayawal
Pune, Ahmedabad
3 - 7 yrs
₹7L - ₹27L / yr
Javascript
Cloud Computing
React.js
Python
Amazon Web Services (AWS)
+4 more
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in Pune.

We have worked on / are working on Software Engineering projects that touch upon making full-fledged products: starting from UI/UX aspects, responsive and blazing-fast front-ends, platform-specific applications (Android, iOS, web applications, desktop applications), very large scale infrastructure, cutting-edge machine learning, and deep learning (AI in general). The projects/products have wide-ranging applications in finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this uses core concepts of computer science such as distributed systems, operating systems, computer networks, process parallelism, cloud computing, embedded systems and the Internet of Things.

PRIMARY RESPONSIBILITIES:
● Own the design, development, evaluation and deployment of highly scalable software products involving front-end and back-end development.
● Maintain quality, responsiveness and stability of the system.
● Design and develop memory-efficient, compute-optimized solutions for the software.
● Design and administer automated testing tools and continuous integration tools.
● Produce comprehensive and usable software documentation.
● Evaluate and make decisions on the use of new tools and technologies.
● Mentor other development engineers.

KNOWLEDGE AND SKILL REQUIREMENTS:
● Mastery of one or more back-end programming languages (Python, Java, Scala, C++, etc.)
● Proficiency in front-end programming paradigms and libraries (for example: HTML, CSS and advanced JavaScript libraries and frameworks such as Angular, Knockout, React)
● Knowledge of automated and continuous integration testing tools (Jenkins, TeamCity, Circle CI, etc.)
● Proven experience of platform-level development for large-scale systems
● Deep understanding of various database systems (MySQL, Mongo, Cassandra)
● Ability to plan and design software system architecture
● Development experience for mobile, browsers and desktop systems is desired
● Knowledge and experience of using distributed systems (Hadoop, Spark) and cloud environments (Amazon EC2, Google Compute Engine, Microsoft Azure)
● Experience working in agile development; knowledge and prior experience of tools like Jira is desired
● Experience with version control systems (Git, Subversion or Mercurial)
Publicis Sapient

Posted by Pooja Singh
Bengaluru (Bangalore), Mumbai, Gurugram, Noida, Hyderabad, Pune
4 - 19 yrs
₹1L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Microservices
+7 more
  • Experience building large-scale, large-volume services and distributed apps, taking them through production and post-production life cycles
  • Experience in programming languages: Java 8, JavaScript
  • Experience in microservice development or architecture
  • Experience with web application frameworks: Spring, Spring Boot, or Micronaut
  • Designing: high-level/low-level design
  • Development experience: Agile/Scrum, TDD (Test-Driven Development) or BDD (Behaviour-Driven Development), plus unit testing
  • Infrastructure experience: DevOps, CI/CD pipelines, Docker/Kubernetes/Jenkins, and cloud platforms like AWS, Azure, GCP, etc.
  • Experience with one or more databases: RDBMS or NoSQL
  • Experience with one or more messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel
  • Security (authentication, scalability, performance monitoring)
Abishar Technologies

Posted by Chandra Goswami
Pune
6 - 10 yrs
₹8L - ₹23L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

Role and responsibilities

 

  • Expertise in AWS (most typical services), Docker & Kubernetes.
  • Strong scripting knowledge, strong DevOps automation, good at Linux.
  • Hands-on with CI/CD (CircleCI preferred, but any CI/CD tool will do). Strong understanding of GitHub.
  • Strong understanding of AWS networking; strong with security & certificates.

Nice-to-have skills

  • Involved in Product Engineering
Chennai, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai
9 - 16 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Microsoft Windows Azure
+9 more

About Company:

The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.

  • Role Overview
    • Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
  • Key Knowledge
    • 3-5+ years of experience in AWS, GCP, or Azure technologies
    • Is likely certified on one or more of the major cloud platforms
    • Strong hands-on experience with technologies such as Terraform, Kubernetes, and Docker, and with container orchestration (see the sketch below)
    • Ability to guide and lead internal agile teams on cloud technology
    • Background in the financial services industry or similar critical operational experience
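
As a hedged taste of the hands-on container-orchestration work above, a minimal sketch using the official Kubernetes Python client; the kubeconfig setup and namespace are assumptions:

from kubernetes import client, config

# List pods in a namespace; assumes a reachable cluster and a local kubeconfig.
config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)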
Remote, Bengaluru (Bangalore), Chennai, Pune, Hyderabad, Mumbai
3 - 10 yrs
₹8L - ₹28L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more

Experience: 3+ years in Cloud Architecture

About Company:

The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.



Cloud Architect / Lead

  • Role Overview
    • Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
  • Key Knowledge
    • 3-5+ years of experience in AWS, GCP, or Azure technologies
    • Is likely certified on one or more of the major cloud platforms
    • Strong hands-on experience with technologies such as Terraform, Kubernetes, and Docker, and with container orchestration
    • Ability to guide and lead internal agile teams on cloud technology
    • Background in the financial services industry or similar critical operational experience
 
Intuitive Technology Partners
Aakriti Gupta
Posted by Aakriti Gupta
Remote, Ahmedabad, Pune, Gurugram, Chennai, Bengaluru (Bangalore), india
6 - 12 yrs
Best in industry
DevOps
Kubernetes
Docker
Terraform
Linux/Unix
+10 more

Intuitive is a fast-growing, top-tier Cloud Solutions and Services company supporting global enterprise customers across the Americas, Europe, and the Middle East.

Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's world-class global technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career serving some of the largest customers.

Job Description:

  • Extensive experience with Kubernetes (EKS/GKE) and its ecosystem tooling, e.g., Prometheus, ArgoCD, Grafana, Istio, etc.
  • Extensive AWS/GCP core infrastructure skills
  • Infrastructure/IaC automation and integration with Terraform
  • Kubernetes resources engineering and management
  • Experience with DevOps tools, CI/CD pipelines, and release management
  • Good at creating documentation (runbooks, design documents, implementation plans)

Linux Experience :

  1. Namespace
  2. Virtualization
  3. Containers
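
To illustrate how these three topics connect (namespaces are the kernel primitive that containers are built from), here is a minimal, assumption-laden Python sketch: it is Linux-only, needs root privileges, and calls libc's unshare(2) directly via ctypes:

import ctypes, os, socket

CLONE_NEWUTS = 0x04000000  # constant from <sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (needs Linux and root)")

socket.sethostname("sandbox")   # visible only inside the new UTS namespace
print(os.uname().nodename)      # prints: sandbox

The hostname change is confined to the new namespace while the host keeps its own; container runtimes compose this same isolation property across several namespace types.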

 

Networking Experience

  1. Virtual networking
  2. Overlay networks
  3. VXLANs, GRE

 

Kubernetes Experience :

Should have experience bringing up a Kubernetes cluster manually, without using the kubeadm tool.

 

Observability                              

Experience in observability is a plus

 

Cloud automation :

Familiarity with cloud platforms (primarily AWS) and DevOps tools like Jenkins, Terraform, etc.

 

DataMetica

at DataMetica

1 video
7 recruiters
Sayali Kachi
Posted by Sayali Kachi
Pune, Hyderabad
2 - 6 yrs
₹3L - ₹15L / yr
Google Cloud Platform (GCP)
SQL
BQ

Datametica is looking for talented BigQuery engineers.

 

Total Experience - 2+ yrs.

Notice Period – 0 - 30 days

Work Location – Pune, Hyderabad

 

Job Description:

  • Sound understanding of Google Cloud Platform; should have worked on BigQuery, Workflows, or Composer
  • Experience in GCP migration and integration projects in large-scale environments, including ETL technical design, development, and support
  • Good SQL and Unix scripting skills; programming experience with Python, Java, or Spark would be desirable (see the sketch below)
  • Experience in SOA and services-based data solutions would be advantageous
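
Purely as a hedged illustration of the BigQuery-plus-Python skill set above (assumes the google-cloud-bigquery library and configured GCP credentials; the public dataset is just an example):

from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 5
"""
# client.query() submits the job; result() blocks until rows are ready.
for row in client.query(query).result():
    print(row.word, row.total)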

 

About the Company: 

www.datametica.com

Datametica is one of the world's leading Cloud and Big Data analytics companies.

Datametica was founded in 2013 and has grown at an accelerated pace within a short span of 8 years. We are providing a broad and capable set of services that encompass a vision of success, driven by innovation and value addition that helps organizations in making strategic decisions influencing business growth.

Datametica is the global leader in migrating legacy data warehouses to the cloud. Datametica moves data warehouses to the cloud faster, at a lower cost, and with fewer errors, even running in parallel with full data validation for months.

Datametica's specialized team of Data Scientists has implemented award-winning analytical models for use cases involving both unstructured and structured data.

Datametica has earned the highest level of partnership with Google, AWS, and Microsoft, which enables Datametica to deliver successful projects for clients across industry verticals at a global level, with teams deployed in the USA, EU, and APAC.

 

Recognition:

We are gratified to be recognized as a Top 10 Big Data Global Company by CIO Story.

 

If it excites you, please apply.

Pune
5 - 9 yrs
₹10L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+8 more
Preferred Education & Experience:

• Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience may substitute for the above if you come from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: release orchestration & automation, source code & build management, code quality & security management, behavior-driven development, test-driven development, continuous integration, continuous delivery, continuous deployment, and operational monitoring & management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms, viz. Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance; or equivalent demonstrable cloud platform experience.
• Well-versed, with demonstrable working experience, in API management, API gateways, service mesh, identity & access management, and data protection & encryption tools and platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, network, and storage-networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
Pune, Hyderabad, Gandhinagar
6 - 12 yrs
₹5L - ₹21L / yr
Amazon Web Services (AWS)
Docker
Kubernetes
DevOps
Windows Azure
+9 more

Key Skills Required:

 

·         You will be part of the DevOps engineering team, configuring project environments, troubleshooting integration issues in different systems, and building new features for the next generation of cloud recovery services and managed services.

·         You will directly guide the technical strategy for our clients and build out a new DevOps capability within the company to improve our business relevance for customers.

·         You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and come up with scalable solutions.

·         You will be responsible for reviewing the infrastructure and configuration of microservices, and the packaging and deployment of applications.

 

To be the right fit, you'll need:

 

·         Expert in cloud services like AWS.

·         Experience in Terraform scripting.

·         Experience in container technology like Docker and orchestration like Kubernetes.

·         Good knowledge of frameworks such as Jenkins CI/CD pipelines, Bamboo, etc.

·         Experience with version control systems like Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible).

Pune
5 - 8 yrs
₹10L - ₹30L / yr
Java
Python
Javascript
Scala
Docker
+5 more
 Sr. DevOps Software Engineer:
Preferred Education & Experience:
• Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience may substitute for the above if you come from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: release orchestration & automation, source code & build management, code quality & security management, behavior-driven development, test-driven development, continuous integration, continuous delivery, continuous deployment, and operational monitoring & management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms, viz. Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance; or equivalent demonstrable cloud platform experience.
• Well-versed, with demonstrable working experience, in API management, API gateways, service mesh, identity & access management, and data protection & encryption tools and platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, network, and storage-networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
Required Experience: 5+ Years
Job Location: Remote/Pune
MNC Company - Product Based
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Python
Google Cloud Platform (GCP)
+2 more

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject; a master's degree in any data subject is a strong advantage.
  • 4-6 years of data engineering experience.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and data processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc).
  • Good working experience in at least one ETL tool (Airflow would be preferable; see the sketch below).
  • Strong analytical and problem-solving skills.
  • Good-to-have skills: Apache PySpark, CircleCI, Terraform.
  • Motivated, self-directed, able to work with ambiguity, and interested in emerging technologies and agile, collaborative processes.
  • Understanding of and experience with agile/scrum delivery methodology.
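
A minimal, hedged sketch of the Python-and-Airflow ETL style described above (Airflow 2.x; the task bodies, schedule, and IDs are all placeholders):

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")

def transform():
    print("clean and reshape the extracted rows")

def load():
    print("write the transformed rows to the warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # declare the extract -> transform -> load ordering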

 

Bengaluru (Bangalore), Pune, Mumbai
4 - 8 yrs
₹2L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Description:

 

Mandatory Skills:

Should have strong working experience with cloud technologies like AWS and Azure.

Should have strong working experience with CI/CD tools like Jenkins and Rundeck (see the sketch below).

Must have experience with configuration management tools like Ansible.

Must have working knowledge of tools like Terraform.

Must be proficient in scripting languages like shell and Python.

Should have expertise in DevOps practices and have demonstrated the ability to apply that knowledge across diverse projects and teams.
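
For illustration, a hedged sketch of driving Jenkins from Python with the python-jenkins library; the server URL, credentials, job name, and parameter are all placeholders:

import jenkins

server = jenkins.Jenkins(
    "https://jenkins.example.com", username="ci-bot", password="api-token"
)
print("Connected to Jenkins", server.get_version())

# Trigger a parameterised job, then read its metadata.
server.build_job("deploy-service", parameters={"ENVIRONMENT": "staging"})
info = server.get_job_info("deploy-service")
print("Next build number:", info["nextBuildNumber"])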

 

Preferable skills:

Experience with tools like Docker, Kubernetes, Puppet, Jira, GitLab, and JFrog.

Experience in scripting languages like Groovy.

Experience with GCP.

 

Summary & Responsibilities:

 Write build pipelines and IaC (ARM templates, Terraform, or CloudFormation).

 Develop Ansible playbooks to install and configure various products.

 Implement Jenkins and Rundeck jobs (and pipelines).

 Must be a self-starter, able to work well in a fast-paced, dynamic environment.

 Work independently and resolve issues with minimal supervision.

 Strong desire to learn new technologies and techniques.

 Strong communication (written/verbal) skills.

 

Qualification:

Bachelor's degree in Computer Science or equivalent.

4+ years of experience in DevOps and AWS.

2+ years of experience in Python, Shell scripting and Azure.

 

 

Pune
2 - 5 yrs
₹4L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

We are seeking a passionate DevOps Engineer to help create the next big thing in data analysis and search solutions.

You will join our Cloud infrastructure team supporting our developers. As a DevOps Engineer, you'll be automating our environment setup and developing infrastructure as code to create a scalable, observable, fault-tolerant, and secure environment. You'll incorporate open-source tools, automation, and Cloud Native solutions, and will empower our developers with this knowledge.

We will pair you up with world-class talent in cloud and software engineering and provide a position and environment for continuous learning.

Numerator

at Numerator

4 recruiters
Ketaki Kambale
Posted by Ketaki Kambale
Remote, Pune
5 - 10 yrs
₹10L - ₹30L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more
This role requires a balance between hands-on infrastructure-as-code deployments and involvement in operational architecture and technology advocacy initiatives across the Numerator portfolio.
 
Responsibilities
  • Selects, develops, and evaluates local personnel to ensure the efficient operation of the function

  • Leads and mentors local DevOps team members throughout the organisation.

  • Stays current with industry standards.

  • Work across engineering team to help define scope and task assignments

  • Participate in code reviews, design work and troubleshooting across business functions, multiple teams and product groups to help communicate, document and address infrastructure issues.

  • Look for innovative ways to improve the observability and monitoring of large-scale systems over a variety of technologies across the Numerator organization.

  • Participate in the creation of training material, helping teams embrace a culture of DevOps with self-healing and self-service ecosystems. This includes discovery, testing and integration of third party solutions in product roadmaps.

  • Lead by example and evangelize DevOps best practices within the team and within the organization and product teams in Numerator.

 

Technical Skills

  • 2+ years of experience in cloud-based systems, in an SRE or DevOps position

  • Professional and positive approach, self-motivated, strong in building relationships, team player, dynamic, creative with the ability to work on own initiatives.

  • Excellent oral and written communication skills.

  • Availability to participate in after-hours on-call support with your fellow engineers and help improve a team’s on-call process where necessary.

  • Strong analytical and problem solving mindset combined with experience of troubleshooting large-scale systems.

  • Working knowledge of networking, operating systems, and packaging/build systems, e.g. Amazon Linux, Ubuntu, pip and npm, Terraform, Ansible, etc.

  • Strong working knowledge of Serverless and Kubernetes based environments in AWS, Azure and Google Cloud Platform (GCP).

  • Experience in managing highly redundant data stores, file systems, and services, both in the cloud and on-premise, including data transfer, redundancy, and cost management.

  • Ability to quickly stand up AWS or other cloud-based platform services in isolation or within product environments to test out a variety of solutions or concepts before developing production-ready solutions with the product teams.

  • Bachelor's or Master's in Science, or a postdoctorate in Computer Science or a related field, or equivalent work experience.

Pune, Mumbai
3 - 8 yrs
₹4L - ₹20L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

Required Competencies:

  • 3+ years of experience in automating application and database deployments using most of the above-mentioned technologies
  • Strong experience in .NET and MS SQL
  • Ability to quickly learn and implement new tools/technologies
  • Ability to excel within an "Agile" environment
  • Infrastructure automation is a plus

Roles and Responsibilities:

  • Application Deployments - Azure DevOps YAML build pipelines and classic release pipelines, PowerShell and bash scripts, Docker containers
  • Database Deployments - DACPAC
  • SCM - BitBucket
  • Infrastructure - Windows Servers, Linux Servers, SQL Server, Azure SQL, and many more Azure resources
  • Application Types - Web APIs, Web Forms, Windows Services, Task Scheduler Jobs, SQL Server Agent jobs
  • Development/Test Stack - VueJS, .NET Framework, .NET Core, Python, TypeScript, PowerBI, SSIS, SQL Server, NUnit, XUnit, Selenium, Postman, Sentry
  • Currently exploring ARM, Terraform and Pulumi for infrastructure automation
  • Automate application/database builds and deployments and write scripts to automate repetitive tasks
  • Optimize and improve existing builds/deployments
  • Deploy applications/databases to different environments
  • Setup/configure infrastructure on Azure
  • Create/merge branches in git
  • Help with debugging post-deployment issues
  • Managing access to BitBucket, Sentry, VMs and Azure resources
Horizontal Integration
Remote, Bengaluru (Bangalore), Hyderabad, Vadodara, Pune, Jaipur, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 15 yrs
₹10L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
+2 more

Position Summary

DevOps is a Department of Horizontal Digital, within which we have 3 different practices.

  1. Cloud Engineering
  2. Build and Release
  3. Managed Services

This opportunity is for a Cloud Engineering role for someone who also has some experience with infrastructure migrations. It is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; along with that, you are also expected to work on different projects building out Sitecore infrastructure from scratch.

We are a Sitecore Platinum Partner, and the majority of the infrastructure work that we are doing is for Sitecore.

Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-prem or on IaaS, PaaS, and containers.

So most of our DevOps work is currently planning, architecting, and deploying infrastructure for Sitecore.
 

Key Responsibilities:

  • This role includes ownership of the technical, commercial, and service elements related to cloud migration and infrastructure deployments.
  • The person selected for this position will ensure high customer satisfaction while delivering infrastructure and migration projects.
  • Candidates must expect to work across multiple projects in parallel, and must also have a fully flexible approach to working hours.
  • Candidates should keep themselves updated with the rapid technological advancements and developments taking place in the industry.
  • Candidates should also have know-how of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.

Requirements:

  • Bachelor's degree in computer science or an equivalent qualification.
  • Total work experience of 6 to 8 years.
  • Total migration experience of 4 to 6 years.
  • Multiple-cloud background (Azure/AWS/GCP).
  • Implementation knowledge of VMs and VNets.
  • Know-how of cloud readiness and assessment.
  • Good understanding of the 6 R's of migration.
  • Detailed understanding of the cloud offerings.
  • Ability to assess and perform discovery independently for any cloud migration.
  • Working experience with containers and Kubernetes.
  • Good knowledge of Azure Site Recovery/Azure Migrate/CloudEndure.
  • Understanding of vSphere and Hyper-V virtualization.
  • Working experience with Active Directory.
  • Working experience with AWS CloudFormation/Terraform templates.
  • Working experience with VPNs/ExpressRoute/peering/network security groups/route tables/NAT gateways, etc.
  • Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, GitHub Actions.
  • High-availability and disaster-recovery implementations, taking RTO and RPO aspects into consideration.
  • Candidates with AWS/Azure/GCP certifications will be preferred.
Searce Inc

at Searce Inc

64 recruiters
Yashodatta Deshapnde
Posted by Yashodatta Deshapnde
Pune, Noida, Bengaluru (Bangalore), Mumbai, Chennai
3 - 10 yrs
₹5L - ₹20L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Terraform
Jenkins
+2 more
Role & Responsibilities:
• At least 4 years of hands-on experience with cloud infrastructure on GCP (see the sketch below)
• Hands-on experience with Kubernetes is mandatory
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge of and hands-on experience with platforms such as GitLab, CircleCI, and Spinnaker
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
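
As a small, hedged taste of hands-on GCP work from Python (assumes the google-cloud-storage library and configured credentials):

from google.cloud import storage

# The client picks up the project and credentials from the environment.
client = storage.Client()
for bucket in client.list_buckets():
    print(bucket.name)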

Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials, and key management.
• Proven experience in various coding languages (Java, Python, etc.) to support DevOps operations and cloud transformation.
• Familiarity with and knowledge of web standards (e.g. REST APIs, web security mechanisms).
• Hands-on experience with GCP.
• Experience in performance tuning, service outage management, and troubleshooting.

Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills; ability to operate independently and make decisions with little direct supervision
Yojito Software Private Limited
Tushar Khairnar
Posted by Tushar Khairnar
Pune
1 - 4 yrs
₹4L - ₹8L / yr
DevOps
Docker
Kubernetes
Python
SQL
+4 more

We are looking for people with programming skills in Python, SQL, and cloud computing. Candidates should have experience with at least one of the major cloud-computing platforms - AWS/Azure/GCP - and professional experience in handling applications and databases in the cloud using VMs and Docker images. They should have the ability to design and develop applications for the cloud.

 

You will be responsible for

  • Leading the DevOps strategy and development of SAAS Product Deployments
  • Leading and mentoring other computer programmers.
  • Evaluating student work and providing guidance in the online courses in programming and cloud computing.

 

Desired experience/skills

Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.

 

Skills:

  • Strong programming skills in Python, SQL,
  • Cloud Computing

 

Experience:

2+ years of programming experience, including Python, SQL, and cloud computing (see the sketch below). Familiarity with a command-line working environment.
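
A small, hedged illustration of the Python-plus-SQL combination, using only the standard library's sqlite3 module (the schema and rows are made-up examples):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deployments (service TEXT, region TEXT, healthy INTEGER)")
conn.executemany(
    "INSERT INTO deployments VALUES (?, ?, ?)",
    [("api", "asia-south1", 1), ("worker", "asia-south1", 0)],
)
# Pull back only the unhealthy services with a parameter-free query.
for service, region in conn.execute(
    "SELECT service, region FROM deployments WHERE healthy = 0"
):
    print(f"unhealthy: {service} in {region}")
conn.close()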

 

Note: A strong programming background in any language and cloud computing platform is required. We are flexible about the degree of familiarity needed for the specific environments (Python, SQL). If you have extensive experience in one of the cloud computing platforms and less in the others, you should still consider applying.

 

Soft Skills:

  • Good interpersonal, written, and verbal communication skills; including the ability to explain the concepts to others.
  • A strong understanding of algorithms and data structures, and their performance characteristics.
  • Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
  • Detail-oriented and well organized.
Zeni

at Zeni

2 recruiters
Sunil Chandurkar
Posted by Sunil Chandurkar
Pune
2 - 10 yrs
₹10L - ₹60L / yr
Python
RESTful APIs
Java
Google Cloud Platform (GCP)
Web Development
+1 more

We are looking for people who take quality as a point of pride. You will be a key member of the engineering staff working on our innovative FinTech product that simplifies the domain of finance management.

 

At Zeni.ai, we provide an AI-powered finance team with a real-time dashboard to manage all the finance functions for startups on one platform - bookkeeping, yearly taxes, bill pay & invoicing, financial projections & budgeting, employee reimbursements, and more. We are headquartered in Palo Alto, California, with an engineering lab in Pune. The founders of Zeni are Snehal Shinde and Swapnil Shinde (twins); they are serial entrepreneurs, and Zeni is their third startup. Before Zeni, they built Mezi.com, which they sold to American Express for $120 million in merely two years. Zeni is very well funded too, and details can be disclosed when we talk.

 

The details about this position are as below:

Responsibilities:

  • You must be, or like to be, a jack of all trades
  • Design and build fault-tolerant, high-performance, scalable systems
  • Design and maintain the core software components that support the Zeni platform
  • Improve the scalability, resilience, observability, and efficiency of our core systems
  • Code primarily in Python.
  • Work closely with, and incorporate feedback from, product management, platform architects, and senior engineers.
  • Fail fast, fix fast. Rapidly fix bugs and solve problems
  • Proactively look for ways to make the Zeni platform better
  • Speed, Speed, Speed - must be a performance freak!

Requirements:

  • B.E./B.Tech in Computer Science.
  • 2 to 5 years of commercial software development experience
  • You have built some impressive, non-trivial web applications by hand (see the sketch below)
  • Excellent programming skills in Python (object-oriented experience is a BIG plus)
  • Google App Engine experience is a huge plus
  • Disciplined approach to testing and quality assurance
  • Good understanding of web technologies (HTTP, Apache) and familiarity with Unix/Linux
  • Good understanding of data structures, algorithms, and design patterns
  • Great written communication and documentation abilities
  • Comfortable in a small, intense, and high-growth start-up environment
  • You know and can admit when something is not great.
  • You can recognise that something you've done needs improvement
  • Past participation in hackathons is a big plus
  • Startup experience or product company experience is a MUST.
  • Experience integrating with 3rd-party APIs
  • Experience with Agile product development methodology
  • Good at maintaining servers and troubleshooting
  • Understanding of database query processing and indexing is preferred
  • Experience with OAuth
  • Experience with Google Cloud and/or Google App Engine platforms
  • Experience writing unit tests
  • Experience with distributed version control systems (e.g. Git)
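
For flavour only, a minimal sketch of a Python web endpoint of the sort this role builds; Flask is an assumption here, and the route and payload are placeholders:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/health")
def health():
    # A trivial JSON endpoint; real handlers would talk to storage, auth, etc.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8080, debug=True)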
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Apurva kalsotra
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
  • Develop complex queries, pipelines, and software programs to solve analytics and data mining problems
  • Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, and deliver predictive and smart data solutions
  • Prototype new applications or data systems
  • Lead data investigations to troubleshoot data issues that arise along the data pipelines
  • Collaborate with different product owners to incorporate data science solutions
  • Maintain and improve the data science platform

Must Have
  • BS/MS/PhD in Computer Science, Electrical Engineering, or related disciplines
  • Strong fundamentals: data structures, algorithms, databases
  • 5+ years of software industry experience, with 2+ years in analytics, data mining, and/or data warehousing
  • Fluency with Python
  • Experience developing web services using REST approaches
  • Proficiency with SQL/Unix/shell
  • Experience in DevOps (CI/CD, Docker, Kubernetes)
  • Self-driven, challenge-loving, detail-oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations

Preferred
  • Industry experience with big data processing technologies such as Spark and Kafka (see the sketch below)
  • Experience with machine learning algorithms and/or R a plus
  • Experience in Java/Scala a plus
  • Experience with MPP analytics engines like Vertica
  • Experience with data integration tools like Pentaho/SAP Analytics Cloud
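
A minimal, hedged PySpark sketch of the batch processing referenced above; the input file and column names are placeholders:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-aggregation").getOrCreate()

# Read a CSV with a header row, then count events per day.
df = spark.read.option("header", True).csv("events.csv")  # placeholder input
daily = df.groupBy("event_date").agg(F.count("*").alias("events"))
daily.orderBy(F.desc("events")).show(10)

spark.stop()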
Sibros

at Sibros

1 recruiter
Alisha Rodrigues
Posted by Alisha Rodrigues
Pune
6 - 15 yrs
₹10L - ₹50L / yr
Terraform
CI/CD
DevOps
IaC
Infrastructure as Code
+6 more

About the Role

  • Own the end-to-end infrastructure of Sibros Cloud
  • Define and introduce security best practices, identify gaps in infrastructure and come up with solutions
  • Design and implement tools and software to manage Sibros’ infrastructure
  • Stay hands-on: write and review code and documentation, and debug and root-cause issues in the production environment

Minimum Qualifications

  • Experience in Infrastructure as Code (IaC) to manage multi-cloud environments using cloud-agnostic tools like Terraform or Ansible (see the sketch below)
  • Passionate about security, with a good understanding of industry best practices
  • Experience in programming languages like Python and Golang, and enjoys automating everything using code
  • Good skills and intuition for root-causing issues in production environments
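
A hedged sketch of the automate-everything mindset: a Python wrapper that runs terraform plan and fails loudly on drift. It assumes the terraform CLI is installed and the working directory holds initialised configuration:

import subprocess, sys

result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-input=false"],
    capture_output=True, text=True,
)
# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
if result.returncode == 2:
    print("Drift detected:\n", result.stdout)
    sys.exit(1)
elif result.returncode == 1:
    print("terraform plan failed:\n", result.stderr)
    sys.exit(1)
print("Infrastructure matches configuration.")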

Preferred Qualifications

  • Experience in database and network management
  • Experience in defining security policies and best practices
  • Experience in managing a large scale multi cloud environment
  • Knowledge of SOC, GDPR or ISO 27001 security compliance standards is a plus

Equal Employment Opportunity

Sibros is committed to a policy of equal employment opportunity. We recruit, employ, train, compensate, and promote without regard to race, color, age, sex, ancestry, marital status, religion, national origin, disability, sexual orientation, veteran status, present or past history of mental disability, genetic information or any other classification protected by state or federal law.

Aureus Tech Systems

at Aureus Tech Systems

3 recruiters
Krishna Kanth
Posted by Krishna Kanth
Hyderabad, Bengaluru (Bangalore), Chennai, Visakhapatnam, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 14 yrs
₹18L - ₹25L / yr
.NET
C#
ASP.NET
Web API
LINQ
+3 more

Title: .NET Developer with Cloud

Locations: Hyderabad, Chennai, Bangalore, Pune, and New Delhi (Remote).

Job Type: Full Time


.NET Job Description:

Required experience in the below skills:

  • Azure experience (mandatory)
  • .NET programming (mandatory)
  • DevSecOps capabilities (desired)
  • Scripting skills (desired)
  • Docker (desired)
  • Data lake management (desired)
  • Minimum of 5+ years of application development experience
  • Experience with MS Azure: App Service, Functions, Cosmos DB, and Active Directory
  • Deep understanding of C#, .NET Core, ASP.NET Web API 2, MVC
  • Experience with MS SQL Server
  • Strong understanding of object-oriented programming
  • Experience working in an Agile environment
  • Strong understanding of code versioning tools such as Git or Subversion
  • Usage of automated build and/or unit testing and continuous integration systems
  • Excellent communication, presentation, influencing, and reasoning skills
  • Capable of building relationships with colleagues and key individuals
  • Must have the capability to learn new technologies

DataMetica

at DataMetica

1 video
7 recruiters
Nikita Aher
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java
+1 more

Summary
Our Kafka developer combines technical skills, communication skills, and business knowledge, and should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and enterprise data warehouses (preferably GCP BigQuery or an equivalent cloud EDW), and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement, and maintenance of data applications, both as an individual contributor and as a lead.
  • Lead the identification, isolation, resolution, and communication of problems within the production environment.
  • Lead development applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and cloud enterprise data warehouses: Google BigQuery (preferred), or AWS Redshift or Snowflake (optional).
  • Design and recommend the best approach for data movement from different sources to the cloud EDW using Apache/Confluent Kafka.
  • Perform independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at various levels, from senior management to developers to business SMEs, for project definition.
  • Work on multiple platforms and multiple projects concurrently.
  • Perform code and unit testing for complex-scope modules and projects.
  • Provide expertise and hands-on experience working on Kafka Connect using the schema registry in a very high-volume environment (~900 million messages).
  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams, and Kafka Control Center.
  • Provide expertise and hands-on experience working with AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands-on experience working with Kafka connectors such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors, and JMS source connectors, plus tasks, workers, converters, and transforms.
  • Provide expertise and hands-on experience with custom connectors using the Kafka core concepts and API.
  • Working knowledge of the Kafka REST proxy.
  • Ensure optimum performance, high availability, and stability of solutions.
  • Create topics, set up redundant clusters, deploy monitoring tools and alerts, and have good knowledge of best practices.
  • Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms (see the sketch below).
  • Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver our solutions using Spark, Scala, Python, Hive, Kafka, and other tools in the Hadoop ecosystem.
  • Use automation tools for provisioning, such as Jenkins, uDeploy, or relevant technologies.
  • Ability to perform data-related benchmarking, performance analysis, and tuning.
  • Strong skills in in-memory applications, database design, and data integration.
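
As a hedged illustration of such producer/consumer stubs, a minimal sketch using the kafka-python library (broker address, topic, and group id are placeholders; Confluent's own Python client would look similar):

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", key=b"order-1", value=b'{"amount": 42}')
producer.flush()  # block until the broker has acknowledged the send

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when idle for 5 seconds
)
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)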
Channel 3

at Channel 3

1 recruiter
HR Shubhangi
Posted by HR Shubhangi
Pune, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹12L / yr
Technical Architecture
Solution architecture
Information architecture
Architecture
PowerBI
+6 more
Solution architect/designer
Data warehousing architect/designer
Data migration architect/designer
Hyderabad, Bengaluru (Bangalore), Pune, Chennai
8 - 12 yrs
₹7L - ₹30L / yr
DevOps
Terraform
Docker
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

Job Description:

 

○ Develop best practices for the team, and take responsibility for architecture solutions and documentation operations, in order to meet the engineering department's quality standards

○ Participate in production outages, handle complex issues, and work towards resolution

○ Develop custom tools and integrations with existing tools to increase engineering productivity

 

 

Required Experience and Expertise

 

○ Good knowledge of Terraform; someone who has worked on large TF code bases.

○ Deep understanding of Terraform, with best practices and experience writing TF modules.

○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3, and IAM. A cost-aware mindset towards cloud services.

○ Deep understanding of kernel, networking, and OS fundamentals

NOTICE PERIOD - Max - 30 days

DataMetica

at DataMetica

1 video
7 recruiters
Nikita Aher
Posted by Nikita Aher
Pune
12 - 20 yrs
₹20L - ₹35L / yr
Data Warehouse (DWH)
ETL
Big Data
Business Intelligence (BI)
Project Management
+1 more

Job Description

Experience : 10+ Years

Location : Pune


Job Requirements:

  • Minimum of 10+ years of experience, with a proven record of increased responsibility
  • Hands-on experience in designing, developing, and managing Big Data, Cloud, Data Warehousing, and Business Intelligence projects
  • Experience managing projects in Big Data, Cloud, Data Warehousing, and Business Intelligence, using open-source or top-of-the-line tools and technologies
  • Good knowledge of dimensional modeling
  • Experience working with any ETL and BI reporting tools
  • Experience managing medium-to-large projects, preferably on Big Data
  • Proven experience in project planning, estimation, execution, and implementation of medium-to-large projects
  • Should be able to communicate effectively in English
  • Strong management and leadership skills, with a proven ability to develop and manage client relationships
  • Proven problem-solving skills from both technical and managerial perspectives
  • Attention to detail and a commitment to excellence and high standards
  • Excellent interpersonal and communication skills, both verbal and written
  • Position is remote, with occasional travel to other offices, client sites, conventions, training locations, etc.
  • Bachelor's degree in Computer Science, Business/Economics, or a related field, or demonstrated equivalent/practical knowledge or experience

Job Responsibilities:

  • Day-to-day project management, and scrum and agile management, including project planning, delivery, and execution of Big Data and BI projects
  • Primary point of contact for the customer for all project engagements, delivery, and project escalations
  • Design the right architecture and technology stack, depending on business requirements, for Cloud/Big Data and BI related technologies, both on-premise and on cloud
  • Liaise with key stakeholders to define the Cloud/Big Data solutions roadmap and prioritize the deliverables
  • Responsible for end-to-end project delivery of Cloud/Big Data solutions from a project estimation, project planning, resourcing, and monitoring perspective
  • Drive and participate in requirements-gathering workshops, estimation discussions, design meetings, and status review meetings
  • Support and assist the team in resolving issues during testing and when the system is in production
  • Involved in the full customer lifecycle, with a goal to make customers successful and increase revenue and retention
  • Interface with the offshore engineering team to solve customer issues
  • Develop programs that meet customer needs with respect to functionality, performance, scalability, reliability, schedule, principles, and recognized industry standards
  • Requirement analysis and documentation
  • Manage day-to-day operational aspects of a project and its scope
  • Prepare for engagement reviews and quality assurance procedures
  • Visit and/or host clients to strengthen business relationships
Fast paced Startup
Pune
3 - 6 yrs
₹15L - ₹22L / yr
Big Data
Data engineering
Hadoop
Spark
Apache Hive
+6 more

Years of Exp: 3-6+ years
Skills: Scala, Python, Hive, Airflow, Spark

Languages: Java, Python, shell scripting

GCP: Bigtable, Dataproc, BigQuery, GCS, Pub/Sub

OR
AWS: Athena, Glue, EMR, S3, Redshift

MongoDB, MySQL, Kafka

Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time

Searce Inc

at Searce Inc

64 recruiters
Mishita Juneja
Posted by Mishita Juneja
Pune
3 - 6 yrs
₹8L - ₹14L / yr
Project Management
Project manager
Project coordination
Team building
Team Management
+8 more

Project Manager

Who we are?

Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What we believe?

  • Best practices are overrated
      • Implementing best practices can only make one 'average'.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How we work?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

Are you the one? Quick self-discovery test:

  1. Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
  2. Passion for sales: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner SaaS-ify his 7 gas stations across other geographies.
  3. Compassion for customers: You listen more than you speak.  When you do speak, people feel the need to listen.
  4. Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’

Do you cloud?

  1. Expertise driving infrastructure engineering projects & transformation initiatives, from deep-tech startups to 42,000U (1,000 42U racks) loaded enterprises.
  2. Have you tandem-jumped at least 50 times taking along a trusted business having on-prem workloads to safe-land onto a public cloud with a soft touchdown?
    1. Do you understand the innards of Apache web servers on Linux and Sharepoint Server Farms alike?
    2. Do you speak CloudFormation? Terraform? Do you speak JSON? 
    3. Do you love automating everything possible leveraging Python / Powershell / Bash?
  3. Are you a voracious reader fascinated by the latest & greatest innovations on public cloud?

Introduction

We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities, and small team sizes – wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.

  1. This is an entrepreneurial project management position that challenges the client status quo by helping them re-imagine the art of what is possible, using a consulting mindset and business strategy skills.
  2. This position requires fanatic iterative-improvement ability: the ability to coordinate with Sales - Presales - Delivery - MS and get project delivery done.
  3. This position is for a hard-core-geek-turned-engineer-turned-tech-enthusiast-turned-Project-Manager.

What we are NOT looking for: Buzzword Bozos (BB) or Certification Chasers (CC).

What we seek is an AA (Awesome Attitude to Learn, Improve & Coach).

Not just a BB or CC? Quick self-discovery test:

When was the last time you thought about how GAN (Generative Adversarial Networks) can help auto-create new building designs for an architect or new jewellery designs for a designer OR passionately convinced a friend that Google Video Intelligence API or AWS Rekognition can now auto-video-screen an applicant with 90+% confidence level almost automating 75+% efforts of a recruiter, OR leverage Google Vision API to do e-KYC? If this is what you constantly get blamed for & your friends have made a strip for you on xkcd, you are probably the right-fit for what we are looking for.

Your bucket of Undertaking :

This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infra from on-premise to cloud, or to help optimize cloud spend when moving from one public cloud to another.

  1. Be the first one to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
  2. Continually augment skills and learn new tech as the technology and client needs evolve
  3. Use your experience in Google Cloud Platform, AWS, or Microsoft Azure to build hybrid-cloud solutions for customers.
  4. Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods.
  5. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
  6. Participate in technical reviews of requirements, designs, code, and other artifacts
  7. Identify and keep abreast of new technical concepts in Google Cloud Platform

Education, Experience, etc.

  1. Is Education overrated? Yes, we believe so. However, there is no way to locate you otherwise. So unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since 12. And the latter is better. We will find you faster if you specify the latter in some manner. Not just degrees - we are not too thrilled by tech certifications either ... :) To reiterate: a passion for all things tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude, and a strong 'desire to deliver' outlive those fancy degrees!
  2. 3 - 5 years of experience, with at least 1 - 2 years of hands-on experience in delivering Cloud Consulting (AWS/GCP/Azure) projects in a global enterprise environment.
  3. Good analytical, communication, problem-solving, and learning skills.
  4. Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.

A quick self-discovery test below:

  1. How you treat yourself & others?
    1. You listen more than you speak.  When you do speak, people feel the need to listen.
    2. You have ‘one’ life - no work life or personal life. You are the same at both places.
    3. You are generally happy and passionate about life. When shit does happen you know how to tell your heart ‘All is well’.
    4. You are compassionate to yourself, you love your work, your company, your country, and are generally a person people like to be around.
  2. How you work & live?
    1. You make difficult & complex decisions in an environment with ill-defined constraints and uncertainty.
    2. You are able to admit to your team that you were shit scared while making those decisions.
    3. You are able to juggle conflicting priorities and remain composed as the client keeps on changing requirements. :)
    4. You are genuinely passionate about developing great software, learning a lot, helping others learn and  having loads of fun while doing so.
  3. What you love?
    1. You love things. You are passionate. You care for your self, family, country and Big Bang Theory (and this is a must!).
    2. You love to organize, index, and improve things around you - yes, you are 'Sheldon'-ish at times and 'Leonard'-ish at other times.
    3. You are passionate about improving processes and you truly feel satisfied by making things better.
    4. You love Google. And AWS. And Terraform. And CI - CD pipelines. And linux.

 

Searce Inc

at Searce Inc

64 recruiters
Mishita Juneja
Posted by Mishita Juneja
Pune
3 - 6 yrs
₹8L - ₹14L / yr
DevOps
Kubernetes
Docker
Terraform
Cloud Computing
+11 more

Senior Devops Engineer



Who are we?

Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one 'average'.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

Are you the one? Quick self-discovery test:

  1. Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
  2. Passion for sales: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner SaaS-ify his 7 gas stations across other geographies.
  3. Compassion for customers: You listen more than you speak.  When you do speak, people feel the need to listen.
  4. Humor for life: When was the last time you told a concerned CEO, 'If Elon Musk can attempt to take humanity to Mars, why can't we take your business to run on cloud?'

Introduction

When was the last time you thought about rebuilding your smart phone charger using solar panels on your backpack OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.

We are quite keen to meet you if:

  • You eat, dream, sleep and play with Cloud Data Store & engineering your processes on cloud architecture
  • You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
  • You like experimenting, taking risks and thinking big.

3 things this position is NOT about:

  1. This is NOT just a job; this is a passionate hobby for the right kind.
  2. This is NOT a boxed position. You will code, clean, test, build and recruit & energize.
  3. This is NOT a position for someone who likes to be told what needs to be done.

3 things this position IS about:

  1. Attention to detail matters.
  2. Roles, titles, ego does not matter; getting things done matters; getting things done quicker & better matters the most.
  3. Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars?

Roles and Responsibilities

This is an entrepreneurial Cloud/DevOps Lead position that evolves into the Director of Cloud Engineering. This position requires fanatic iterative-improvement ability: architect a solution, code, research, understand customer needs, research more, rebuild and re-architect; you get the drift. We are seeking hard-core-geeks-turned-successful-techies who are interested in seeing their work used by millions of users the world over.


Responsibilities:

  • Consistently strive to acquire new skills in Cloud, DevOps, Big Data, AI, and ML technologies
  • Design, deploy, and maintain cloud infrastructure for clients - domestic and international
  • Develop tools and automation to make platform operations more efficient, reliable, and reproducible
  • Create container orchestration (Kubernetes, Docker), strive for fully automated solutions, and ensure the uptime and security of all cloud platform systems and infrastructure
  • Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools
  • Provide business, application, and technology consulting in feasibility discussions with technology team members, customers, and business partners
  • Take the initiative to lead, drive, and solve during challenging scenarios

Requirements:

  • 3+ years of experience in the Cloud Infrastructure and Operations domains
  • Experience with Linux systems, RHEL/CentOS preferred
  • Specialisation in one or two cloud deployment platforms: AWS, GCP, Azure
  • Hands-on experience with AWS services (EC2, VPC, RDS, DynamoDB, Lambda)
  • Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .NET)
  • Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
  • Knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef
  • Experience working with deployment and orchestration technologies such as Docker, Kubernetes, and Mesos (see the sketch below)
  • Deep experience in customer-facing roles, with a proven track record of effective verbal and written communication
  • Dependable and a good team player
  • Desire to learn and work with new technologies
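
A hedged taste of driving Docker from Python with the Docker SDK (the "docker" package on PyPI); it assumes a local Docker daemon, and the image and command are placeholders:

import docker

client = docker.from_env()

# Run a throwaway container and capture its output.
output = client.containers.run("alpine:3.18", ["echo", "hello from a container"])
print(output.decode().strip())

# List whatever else is currently running on this daemon.
for container in client.containers.list():
    print(container.short_id, container.name, container.status)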

Key Success Factors

  • Are you
    • Likely to forget to eat, drink or pee when you are coding?
    • Willing to learn, re-learn, research, break, fix, build, re-build and deliver awesome code to solve real business/consumer needs?
    • An open source enthusiast?
  • Absolutely technology agnostic and believe that business processes define and dictate which technology to use?
  • Ability to think on your feet, and follow-up with multiple stakeholders to get things done
  • Excellent interpersonal communication skills
  • Superior project management and organizational skills
  • Logical thought process; ability to grasp customer requirements rapidly and translate the same into technical as well as layperson terms
  • Ability to anticipate potential problems, determine and implement solutions
  • Energetic, disciplined, with a results-oriented approach
  • Strong ethics and transparency in dealings with clients, vendors, colleagues and partners
  • An attitude of 'give me 5 sharp freshers and 6 months and I will rebuild the way people communicate over the internet.'
  • You are customer-centric, and feel strongly about building scalable, secure, quality software. You thrive and succeed in delivering high quality technology products in a growth environment where priorities shift fast. 